Jul 15 23:57:48.929210 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025
Jul 15 23:57:48.929249 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:57:48.929258 kernel: BIOS-provided physical RAM map:
Jul 15 23:57:48.929265 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 15 23:57:48.929272 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 15 23:57:48.929278 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 23:57:48.929286 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 15 23:57:48.929295 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 15 23:57:48.929305 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 23:57:48.929311 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 23:57:48.929318 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 23:57:48.929325 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 23:57:48.929331 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 23:57:48.929338 kernel: NX (Execute Disable) protection: active
Jul 15 23:57:48.929348 kernel: APIC: Static calls initialized
Jul 15 23:57:48.929355 kernel: SMBIOS 2.8 present.
Jul 15 23:57:48.929365 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 15 23:57:48.929373 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:57:48.929380 kernel: Hypervisor detected: KVM
Jul 15 23:57:48.929387 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 23:57:48.929394 kernel: kvm-clock: using sched offset of 4893590978 cycles
Jul 15 23:57:48.929402 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 23:57:48.929409 kernel: tsc: Detected 2794.750 MHz processor
Jul 15 23:57:48.929419 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 23:57:48.929427 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 23:57:48.929434 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 15 23:57:48.929442 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 15 23:57:48.929449 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 23:57:48.929456 kernel: Using GB pages for direct mapping
Jul 15 23:57:48.929463 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:57:48.929471 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 15 23:57:48.929478 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:57:48.929488 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:57:48.929495 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:57:48.929507 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 15 23:57:48.929514 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:57:48.929529 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:57:48.929543 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:57:48.929561 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:57:48.929572 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 15 23:57:48.929585 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 15 23:57:48.929593 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 15 23:57:48.929600 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 15 23:57:48.929608 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 15 23:57:48.929615 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 15 23:57:48.929623 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 15 23:57:48.929640 kernel: No NUMA configuration found
Jul 15 23:57:48.929648 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 15 23:57:48.929655 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 15 23:57:48.929663 kernel: Zone ranges:
Jul 15 23:57:48.929670 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 23:57:48.929678 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 15 23:57:48.929686 kernel: Normal empty
Jul 15 23:57:48.929693 kernel: Device empty
Jul 15 23:57:48.929700 kernel: Movable zone start for each node
Jul 15 23:57:48.929710 kernel: Early memory node ranges
Jul 15 23:57:48.929718 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 23:57:48.929725 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 15 23:57:48.929733 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 15 23:57:48.929740 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 23:57:48.929748 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 23:57:48.929755 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 15 23:57:48.929764 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 23:57:48.929777 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 23:57:48.929787 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 23:57:48.929800 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 23:57:48.929810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 23:57:48.929823 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 23:57:48.929846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 23:57:48.929867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 23:57:48.929876 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 23:57:48.929886 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 23:57:48.929896 kernel: TSC deadline timer available
Jul 15 23:57:48.929906 kernel: CPU topo: Max. logical packages: 1
Jul 15 23:57:48.929921 kernel: CPU topo: Max. logical dies: 1
Jul 15 23:57:48.929930 kernel: CPU topo: Max. dies per package: 1
Jul 15 23:57:48.929938 kernel: CPU topo: Max. threads per core: 1
Jul 15 23:57:48.929945 kernel: CPU topo: Num. cores per package: 4
Jul 15 23:57:48.929952 kernel: CPU topo: Num. threads per package: 4
Jul 15 23:57:48.929960 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 15 23:57:48.929968 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 23:57:48.929975 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 23:57:48.929982 kernel: kvm-guest: setup PV sched yield
Jul 15 23:57:48.929993 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 23:57:48.930000 kernel: Booting paravirtualized kernel on KVM
Jul 15 23:57:48.930008 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 23:57:48.930016 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 15 23:57:48.930023 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 15 23:57:48.930031 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 15 23:57:48.930038 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 15 23:57:48.930045 kernel: kvm-guest: PV spinlocks enabled
Jul 15 23:57:48.930053 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 23:57:48.930064 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:57:48.930072 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:57:48.930079 kernel: random: crng init done
Jul 15 23:57:48.930087 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:57:48.930095 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:57:48.930102 kernel: Fallback order for Node 0: 0
Jul 15 23:57:48.930109 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 15 23:57:48.930117 kernel: Policy zone: DMA32
Jul 15 23:57:48.930127 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:57:48.930134 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 23:57:48.930142 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 15 23:57:48.930149 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 23:57:48.930157 kernel: Dynamic Preempt: voluntary
Jul 15 23:57:48.930164 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:57:48.930172 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:57:48.930180 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 23:57:48.930188 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:57:48.930202 kernel: Rude variant of Tasks RCU enabled.
Jul 15 23:57:48.930210 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:57:48.930217 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:57:48.930253 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 23:57:48.930262 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:57:48.930269 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:57:48.930277 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:57:48.930284 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 15 23:57:48.930292 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:57:48.930310 kernel: Console: colour VGA+ 80x25
Jul 15 23:57:48.930318 kernel: printk: legacy console [ttyS0] enabled
Jul 15 23:57:48.930326 kernel: ACPI: Core revision 20240827
Jul 15 23:57:48.930336 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 23:57:48.930344 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 23:57:48.930352 kernel: x2apic enabled
Jul 15 23:57:48.930363 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 23:57:48.930371 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 15 23:57:48.930379 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 15 23:57:48.930390 kernel: kvm-guest: setup PV IPIs
Jul 15 23:57:48.930397 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 23:57:48.930406 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 15 23:57:48.930413 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 15 23:57:48.930422 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 23:57:48.930429 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 23:57:48.930437 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 23:57:48.930445 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 23:57:48.930455 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 23:57:48.930463 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 23:57:48.930471 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 15 23:57:48.930479 kernel: RETBleed: Mitigation: untrained return thunk
Jul 15 23:57:48.930487 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 23:57:48.930495 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 23:57:48.930503 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 15 23:57:48.930511 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 15 23:57:48.930521 kernel: x86/bugs: return thunk changed
Jul 15 23:57:48.930529 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 15 23:57:48.930537 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 23:57:48.930545 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 23:57:48.930552 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 23:57:48.930560 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 23:57:48.930568 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 15 23:57:48.930576 kernel: Freeing SMP alternatives memory: 32K
Jul 15 23:57:48.930584 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:57:48.930594 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:57:48.930601 kernel: landlock: Up and running.
Jul 15 23:57:48.930609 kernel: SELinux: Initializing.
Jul 15 23:57:48.930617 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:57:48.930635 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:57:48.930643 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 15 23:57:48.930651 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 23:57:48.930659 kernel: ... version: 0
Jul 15 23:57:48.930667 kernel: ... bit width: 48
Jul 15 23:57:48.930678 kernel: ... generic registers: 6
Jul 15 23:57:48.930686 kernel: ... value mask: 0000ffffffffffff
Jul 15 23:57:48.930694 kernel: ... max period: 00007fffffffffff
Jul 15 23:57:48.930701 kernel: ... fixed-purpose events: 0
Jul 15 23:57:48.930709 kernel: ... event mask: 000000000000003f
Jul 15 23:57:48.930717 kernel: signal: max sigframe size: 1776
Jul 15 23:57:48.930724 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:57:48.930732 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:57:48.930740 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:57:48.930751 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:57:48.930758 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 23:57:48.930766 kernel: .... node #0, CPUs: #1 #2 #3
Jul 15 23:57:48.930774 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 23:57:48.930783 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 15 23:57:48.930798 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 136904K reserved, 0K cma-reserved)
Jul 15 23:57:48.930811 kernel: devtmpfs: initialized
Jul 15 23:57:48.930822 kernel: x86/mm: Memory block size: 128MB
Jul 15 23:57:48.930833 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:57:48.930848 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 23:57:48.930856 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:57:48.930864 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:57:48.930872 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:57:48.930880 kernel: audit: type=2000 audit(1752623865.344:1): state=initialized audit_enabled=0 res=1
Jul 15 23:57:48.930887 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:57:48.930895 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 23:57:48.930903 kernel: cpuidle: using governor menu
Jul 15 23:57:48.930911 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:57:48.930921 kernel: dca service started, version 1.12.1
Jul 15 23:57:48.930929 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 15 23:57:48.930936 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 15 23:57:48.930944 kernel: PCI: Using configuration type 1 for base access
Jul 15 23:57:48.930952 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 23:57:48.930960 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:57:48.930968 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:57:48.930975 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:57:48.930983 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:57:48.930993 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:57:48.931001 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:57:48.931009 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:57:48.931016 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:57:48.931024 kernel: ACPI: Interpreter enabled
Jul 15 23:57:48.931032 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 23:57:48.931039 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 23:57:48.931047 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 23:57:48.931055 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 23:57:48.931065 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 23:57:48.931073 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:57:48.931348 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:57:48.931480 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 23:57:48.931601 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 23:57:48.931612 kernel: PCI host bridge to bus 0000:00
Jul 15 23:57:48.931760 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 23:57:48.931882 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 23:57:48.932028 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 23:57:48.932231 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 15 23:57:48.932363 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 23:57:48.932473 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 15 23:57:48.932583 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:57:48.932781 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:57:48.932952 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 15 23:57:48.933119 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 15 23:57:48.933277 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 15 23:57:48.933402 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 15 23:57:48.933523 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 23:57:48.933675 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:57:48.933804 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 15 23:57:48.933925 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 15 23:57:48.934046 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 23:57:48.934194 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 15 23:57:48.934371 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 15 23:57:48.934495 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 15 23:57:48.934617 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 23:57:48.934778 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 15 23:57:48.934918 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 15 23:57:48.935050 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 15 23:57:48.935179 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 15 23:57:48.935349 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 15 23:57:48.935491 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 15 23:57:48.935613 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 23:57:48.935767 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 15 23:57:48.935889 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 15 23:57:48.936010 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 15 23:57:48.936150 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 15 23:57:48.936293 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 15 23:57:48.936304 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 23:57:48.936313 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 23:57:48.936325 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 23:57:48.936333 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 23:57:48.936341 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 23:57:48.936349 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 23:57:48.936357 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 23:57:48.936365 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 23:57:48.936372 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 23:57:48.936380 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 23:57:48.936388 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 23:57:48.936398 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 23:57:48.936406 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 23:57:48.936414 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 23:57:48.936421 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 23:57:48.936429 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 23:57:48.936437 kernel: iommu: Default domain type: Translated
Jul 15 23:57:48.936445 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 23:57:48.936453 kernel: PCI: Using ACPI for IRQ routing
Jul 15 23:57:48.936461 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 23:57:48.936471 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 15 23:57:48.936478 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 15 23:57:48.936601 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 23:57:48.936732 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 23:57:48.936859 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 23:57:48.936870 kernel: vgaarb: loaded
Jul 15 23:57:48.936878 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 23:57:48.936886 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 23:57:48.936898 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 23:57:48.936906 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:57:48.936914 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:57:48.936922 kernel: pnp: PnP ACPI init
Jul 15 23:57:48.937067 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 23:57:48.937080 kernel: pnp: PnP ACPI: found 6 devices
Jul 15 23:57:48.937088 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 23:57:48.937096 kernel: NET: Registered PF_INET protocol family
Jul 15 23:57:48.937108 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:57:48.937116 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:57:48.937124 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:57:48.937132 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:57:48.937140 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:57:48.937148 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:57:48.937155 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:57:48.937163 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:57:48.937171 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:57:48.937182 kernel: NET: Registered PF_XDP protocol family
Jul 15 23:57:48.937317 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 23:57:48.937433 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 23:57:48.937543 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 23:57:48.937663 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 15 23:57:48.937798 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 23:57:48.937935 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 15 23:57:48.937954 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:57:48.937969 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 15 23:57:48.937977 kernel: Initialise system trusted keyrings
Jul 15 23:57:48.937985 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:57:48.937993 kernel: Key type asymmetric registered
Jul 15 23:57:48.938001 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:57:48.938010 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 23:57:48.938020 kernel: io scheduler mq-deadline registered
Jul 15 23:57:48.938031 kernel: io scheduler kyber registered
Jul 15 23:57:48.938040 kernel: io scheduler bfq registered
Jul 15 23:57:48.938054 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 23:57:48.938063 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 23:57:48.938071 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 23:57:48.938080 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 15 23:57:48.938087 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:57:48.938096 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 23:57:48.938104 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 23:57:48.938112 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 23:57:48.938119 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 23:57:48.938362 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 15 23:57:48.938377 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 23:57:48.938494 kernel: rtc_cmos 00:04: registered as rtc0
Jul 15 23:57:48.938608 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T23:57:48 UTC (1752623868)
Jul 15 23:57:48.938735 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 23:57:48.938747 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 15 23:57:48.938755 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:57:48.938762 kernel: Segment Routing with IPv6
Jul 15 23:57:48.938775 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:57:48.938783 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:57:48.938791 kernel: Key type dns_resolver registered
Jul 15 23:57:48.938798 kernel: IPI shorthand broadcast: enabled
Jul 15 23:57:48.938806 kernel: sched_clock: Marking stable (3444005900, 232360265)->(3753462161, -77095996)
Jul 15 23:57:48.938814 kernel: registered taskstats version 1
Jul 15 23:57:48.938822 kernel: Loading compiled-in X.509 certificates
Jul 15 23:57:48.938830 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d'
Jul 15 23:57:48.938837 kernel: Demotion targets for Node 0: null
Jul 15 23:57:48.938847 kernel: Key type .fscrypt registered
Jul 15 23:57:48.938855 kernel: Key type fscrypt-provisioning registered
Jul 15 23:57:48.938863 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:57:48.938871 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:57:48.938879 kernel: ima: No architecture policies found
Jul 15 23:57:48.938886 kernel: clk: Disabling unused clocks
Jul 15 23:57:48.938894 kernel: Warning: unable to open an initial console.
Jul 15 23:57:48.938902 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jul 15 23:57:48.938910 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 23:57:48.938920 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 23:57:48.938928 kernel: Run /init as init process
Jul 15 23:57:48.938936 kernel: with arguments:
Jul 15 23:57:48.938943 kernel: /init
Jul 15 23:57:48.938951 kernel: with environment:
Jul 15 23:57:48.938959 kernel: HOME=/
Jul 15 23:57:48.938966 kernel: TERM=linux
Jul 15 23:57:48.938974 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:57:48.938983 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:57:48.938996 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:57:48.939017 systemd[1]: Detected virtualization kvm.
Jul 15 23:57:48.939026 systemd[1]: Detected architecture x86-64.
Jul 15 23:57:48.939034 systemd[1]: Running in initrd.
Jul 15 23:57:48.939042 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:57:48.939053 systemd[1]: Hostname set to <localhost>.
Jul 15 23:57:48.939062 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:57:48.939070 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:57:48.939078 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:57:48.939087 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:57:48.939096 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:57:48.939105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:57:48.939114 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:57:48.939125 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:57:48.939135 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:57:48.939144 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:57:48.939153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:57:48.939161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:57:48.939170 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:57:48.939178 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:57:48.939189 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:57:48.939198 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:57:48.939206 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:57:48.939215 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:57:48.939245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:57:48.939254 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:57:48.939262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:57:48.939271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:57:48.939282 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:57:48.939291 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:57:48.939299 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:57:48.939308 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:57:48.939316 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:57:48.939326 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:57:48.939338 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:57:48.939347 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:57:48.939355 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:57:48.939364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:57:48.939373 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:57:48.939384 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:57:48.939392 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:57:48.939401 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:57:48.939429 systemd-journald[220]: Collecting audit messages is disabled.
Jul 15 23:57:48.939451 systemd-journald[220]: Journal started
Jul 15 23:57:48.939471 systemd-journald[220]: Runtime Journal (/run/log/journal/858e49c37bee4d7ca525591263b85ab5) is 6M, max 48.6M, 42.5M free.
Jul 15 23:57:48.929915 systemd-modules-load[221]: Inserted module 'overlay'
Jul 15 23:57:48.941517 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:57:48.942167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:57:48.959419 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:57:48.986510 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:57:48.986551 kernel: Bridge firewalling registered
Jul 15 23:57:48.963386 systemd-modules-load[221]: Inserted module 'br_netfilter'
Jul 15 23:57:48.966776 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:57:48.984951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:57:48.987927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:57:48.989835 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:57:48.996497 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:57:48.999141 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:57:49.000170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:57:49.022052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:57:49.024708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:57:49.041489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:57:49.060379 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:57:49.061637 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:57:49.080407 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:57:49.099966 systemd-resolved[256]: Positive Trust Anchors:
Jul 15 23:57:49.099993 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:57:49.100034 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:57:49.102928 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jul 15 23:57:49.104158 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:57:49.158747 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:57:49.260609 kernel: SCSI subsystem initialized
Jul 15 23:57:49.270263 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:57:49.282248 kernel: iscsi: registered transport (tcp)
Jul 15 23:57:49.306684 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:57:49.306762 kernel: QLogic iSCSI HBA Driver
Jul 15 23:57:49.329507 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:57:49.359539 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:57:49.364172 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:57:49.429866 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:57:49.434015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:57:49.506276 kernel: raid6: avx2x4 gen() 28497 MB/s
Jul 15 23:57:49.523260 kernel: raid6: avx2x2 gen() 30005 MB/s
Jul 15 23:57:49.540360 kernel: raid6: avx2x1 gen() 24459 MB/s
Jul 15 23:57:49.540390 kernel: raid6: using algorithm avx2x2 gen() 30005 MB/s
Jul 15 23:57:49.558316 kernel: raid6: .... xor() 19436 MB/s, rmw enabled
Jul 15 23:57:49.558338 kernel: raid6: using avx2x2 recovery algorithm
Jul 15 23:57:49.598322 kernel: xor: automatically using best checksumming function avx
Jul 15 23:57:49.776280 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 23:57:49.785496 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:57:49.787512 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:57:49.891763 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 15 23:57:49.897530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:57:49.900519 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 23:57:49.936115 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Jul 15 23:57:49.970460 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:57:49.973432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:57:50.069885 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:57:50.073823 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 23:57:50.117254 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 23:57:50.124242 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 15 23:57:50.134376 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 23:57:50.146271 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 15 23:57:50.148257 kernel: AES CTR mode by8 optimization enabled
Jul 15 23:57:50.148280 kernel: libata version 3.00 loaded.
Jul 15 23:57:50.148291 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 23:57:50.149387 kernel: GPT:9289727 != 19775487
Jul 15 23:57:50.149413 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 23:57:50.150535 kernel: GPT:9289727 != 19775487
Jul 15 23:57:50.150563 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 23:57:50.151680 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:57:50.166359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:57:50.166488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:57:50.173608 kernel: ahci 0000:00:1f.2: version 3.0
Jul 15 23:57:50.173834 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 15 23:57:50.173849 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 15 23:57:50.171187 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:57:50.180572 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 15 23:57:50.180785 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 15 23:57:50.181125 kernel: scsi host0: ahci
Jul 15 23:57:50.174266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:57:50.183057 kernel: scsi host1: ahci
Jul 15 23:57:50.180189 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:57:50.185316 kernel: scsi host2: ahci
Jul 15 23:57:50.233261 kernel: scsi host3: ahci
Jul 15 23:57:50.234255 kernel: scsi host4: ahci
Jul 15 23:57:50.240300 kernel: scsi host5: ahci
Jul 15 23:57:50.240586 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 15 23:57:50.240612 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 15 23:57:50.240623 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 15 23:57:50.240633 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 15 23:57:50.240650 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 15 23:57:50.240661 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 15 23:57:50.260629 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 15 23:57:50.270507 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 15 23:57:50.306838 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 15 23:57:50.309714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:57:50.330380 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 15 23:57:50.347346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 23:57:50.350626 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 23:57:50.412250 disk-uuid[634]: Primary Header is updated.
Jul 15 23:57:50.412250 disk-uuid[634]: Secondary Entries is updated.
Jul 15 23:57:50.412250 disk-uuid[634]: Secondary Header is updated.
Jul 15 23:57:50.416625 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:57:50.423286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:57:50.546515 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 15 23:57:50.547381 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 15 23:57:50.547433 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 15 23:57:50.547448 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 15 23:57:50.548265 kernel: ata3.00: applying bridge limits
Jul 15 23:57:50.549256 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 15 23:57:50.549291 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 15 23:57:50.550264 kernel: ata3.00: configured for UDMA/100
Jul 15 23:57:50.558357 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 15 23:57:50.558429 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 15 23:57:50.623301 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 15 23:57:50.623712 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 15 23:57:50.645531 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 15 23:57:51.004167 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:57:51.036363 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:57:51.038164 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:57:51.040627 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:57:51.043939 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 23:57:51.073511 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:57:51.449262 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:57:51.449703 disk-uuid[635]: The operation has completed successfully.
Jul 15 23:57:51.477469 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 23:57:51.477612 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 23:57:51.569338 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 23:57:51.610584 sh[663]: Success
Jul 15 23:57:51.663819 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 23:57:51.663889 kernel: device-mapper: uevent: version 1.0.3
Jul 15 23:57:51.665118 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 23:57:51.676246 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 15 23:57:51.720017 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 23:57:51.728525 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 23:57:51.742477 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 23:57:51.749482 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 23:57:51.749512 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (675)
Jul 15 23:57:51.750773 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e
Jul 15 23:57:51.750797 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:57:51.752248 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 23:57:51.756750 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 23:57:51.759150 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:57:51.761402 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 23:57:51.764141 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 23:57:51.767084 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 23:57:51.798268 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707)
Jul 15 23:57:51.798332 kernel: BTRFS info (device vda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:57:51.799631 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:57:51.800325 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:57:51.808256 kernel: BTRFS info (device vda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:57:51.809465 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 23:57:51.813184 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 23:57:51.904431 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:57:51.910456 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:57:52.020058 systemd-networkd[845]: lo: Link UP
Jul 15 23:57:52.020069 systemd-networkd[845]: lo: Gained carrier
Jul 15 23:57:52.021768 systemd-networkd[845]: Enumeration completed
Jul 15 23:57:52.022166 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:57:52.022170 systemd-networkd[845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:57:52.025601 ignition[754]: Ignition 2.21.0
Jul 15 23:57:52.024368 systemd-networkd[845]: eth0: Link UP
Jul 15 23:57:52.025611 ignition[754]: Stage: fetch-offline
Jul 15 23:57:52.024372 systemd-networkd[845]: eth0: Gained carrier
Jul 15 23:57:52.025671 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:57:52.024381 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:57:52.025685 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:57:52.027855 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:57:52.025819 ignition[754]: parsed url from cmdline: ""
Jul 15 23:57:52.025825 ignition[754]: no config URL provided
Jul 15 23:57:52.025832 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 23:57:52.025845 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Jul 15 23:57:52.025876 ignition[754]: op(1): [started] loading QEMU firmware config module
Jul 15 23:57:52.025883 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 15 23:57:52.139128 ignition[754]: op(1): [finished] loading QEMU firmware config module
Jul 15 23:57:52.139338 systemd[1]: Reached target network.target - Network.
Jul 15 23:57:52.151335 systemd-networkd[845]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 23:57:52.180750 ignition[754]: parsing config with SHA512: 20907d93f7b0b9bf3451280f8eec93ab25b499a022e2e4b18752b8783c131deb95854b0c66b4c2fb76ad1ebcba084ab4134e148cf2058e37849e8b7c33e6c0f7
Jul 15 23:57:52.189994 unknown[754]: fetched base config from "system"
Jul 15 23:57:52.190012 unknown[754]: fetched user config from "qemu"
Jul 15 23:57:52.190614 ignition[754]: fetch-offline: fetch-offline passed
Jul 15 23:57:52.190702 ignition[754]: Ignition finished successfully
Jul 15 23:57:52.330823 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:57:52.331806 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 15 23:57:52.333614 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 23:57:52.385662 ignition[858]: Ignition 2.21.0
Jul 15 23:57:52.385679 ignition[858]: Stage: kargs
Jul 15 23:57:52.385874 ignition[858]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:57:52.385887 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:57:52.390803 ignition[858]: kargs: kargs passed
Jul 15 23:57:52.390912 ignition[858]: Ignition finished successfully
Jul 15 23:57:52.434677 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 23:57:52.438694 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 23:57:52.496283 ignition[866]: Ignition 2.21.0
Jul 15 23:57:52.496773 ignition[866]: Stage: disks
Jul 15 23:57:52.499004 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:57:52.499038 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:57:52.500445 ignition[866]: disks: disks passed
Jul 15 23:57:52.500518 ignition[866]: Ignition finished successfully
Jul 15 23:57:52.505371 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 23:57:52.507640 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 23:57:52.507742 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 23:57:52.511181 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:57:52.513366 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:57:52.515257 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:57:52.518118 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 23:57:52.544434 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 23:57:53.083450 systemd-networkd[845]: eth0: Gained IPv6LL
Jul 15 23:57:53.108339 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 23:57:53.111125 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 23:57:53.253251 kernel: EXT4-fs (vda9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none.
Jul 15 23:57:53.253849 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 23:57:53.255515 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:57:53.257987 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:57:53.260085 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 23:57:53.262010 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 23:57:53.262075 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 23:57:53.262109 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:57:53.286247 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 23:57:53.288708 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 23:57:53.294880 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884)
Jul 15 23:57:53.294940 kernel: BTRFS info (device vda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:57:53.294959 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:57:53.295731 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:57:53.299719 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:57:53.330755 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 23:57:53.334893 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory
Jul 15 23:57:53.338806 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 23:57:53.343936 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 23:57:53.442126 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 23:57:53.494424 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 23:57:53.496234 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 23:57:53.515368 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 23:57:53.561871 kernel: BTRFS info (device vda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:57:53.576423 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 23:57:53.592322 ignition[999]: INFO : Ignition 2.21.0
Jul 15 23:57:53.592322 ignition[999]: INFO : Stage: mount
Jul 15 23:57:53.594397 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:57:53.594397 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:57:53.596574 ignition[999]: INFO : mount: mount passed
Jul 15 23:57:53.596574 ignition[999]: INFO : Ignition finished successfully
Jul 15 23:57:53.597582 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 23:57:53.600243 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 23:57:54.255758 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:57:54.282269 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Jul 15 23:57:54.282302 kernel: BTRFS info (device vda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:57:54.284767 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:57:54.284793 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 23:57:54.288940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:57:54.360306 ignition[1029]: INFO : Ignition 2.21.0 Jul 15 23:57:54.360306 ignition[1029]: INFO : Stage: files Jul 15 23:57:54.362697 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:57:54.362697 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:57:54.366512 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Jul 15 23:57:54.368772 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 23:57:54.368772 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 23:57:54.385767 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 23:57:54.387427 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 23:57:54.389103 unknown[1029]: wrote ssh authorized keys file for user: core Jul 15 23:57:54.390316 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 23:57:54.392041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 23:57:54.394019 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 15 23:57:54.434811 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 23:57:54.586618 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 23:57:54.586618 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 23:57:54.591209 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 23:57:54.591209 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:57:54.591209 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:57:54.591209 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:57:54.591209 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:57:54.591209 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:57:54.591209 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:57:54.662978 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Jul 15 23:57:54.698261 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:57:54.698261 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 23:57:54.921453 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 23:57:54.921453 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 23:57:54.950771 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 15 23:57:55.281944 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 23:57:55.708578 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 23:57:55.708578 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 23:57:55.712645 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:57:56.890612 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:57:56.890612 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 23:57:56.890612 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 15 23:57:56.890612 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 23:57:56.898848 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 23:57:56.898848 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 15 23:57:56.898848 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 23:57:56.914510 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 23:57:56.921766 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 23:57:56.923746 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 23:57:56.923746 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 15 23:57:56.923746 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 23:57:56.923746 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:57:56.923746 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:57:56.923746 
ignition[1029]: INFO : files: files passed Jul 15 23:57:56.923746 ignition[1029]: INFO : Ignition finished successfully Jul 15 23:57:56.933853 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 23:57:56.957664 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 23:57:56.960353 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 23:57:56.987361 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 23:57:56.987532 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 23:57:56.991814 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory Jul 15 23:57:56.996405 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:57:56.996405 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:57:57.005461 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:57:56.998750 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:57:57.003015 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 23:57:57.008089 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 23:57:57.073004 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 23:57:57.073147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 23:57:57.074335 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 23:57:57.074623 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 23:57:57.074990 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 23:57:57.076078 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 23:57:57.098210 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:57:57.100883 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 23:57:57.124317 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:57:57.124511 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:57:57.127713 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 23:57:57.128805 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 23:57:57.128955 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:57:57.133403 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 23:57:57.134542 systemd[1]: Stopped target basic.target - Basic System. Jul 15 23:57:57.136404 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 23:57:57.137315 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:57:57.137768 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 23:57:57.138087 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:57:57.138590 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 23:57:57.138915 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
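The files stage logged above writes an SSH key for "core", fetches the helm tarball, installs prepare-helm.service (preset enabled) and coreos-metadata.service (preset disabled), and links the kubernetes sysext into /etc/extensions. A Python sketch of the general shape of an Ignition v3 config that would drive such a run; the real config was delivered over QEMU fw_cfg and is not reproduced in the log, so the spec version, key, and unit contents below are placeholders:

import json

# Illustrative reconstruction only; values not shown in the log are placeholders.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n# placeholder"},
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))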
Jul 15 23:57:57.139278 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 23:57:57.175637 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 23:57:57.177585 systemd[1]: Stopped target swap.target - Swaps. Jul 15 23:57:57.179308 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 23:57:57.179475 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:57:57.182997 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:57:57.184180 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:57:57.185190 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 23:57:57.188177 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:57:57.188340 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 23:57:57.188482 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 23:57:57.192449 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 23:57:57.192592 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:57:57.193606 systemd[1]: Stopped target paths.target - Path Units. Jul 15 23:57:57.196318 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 23:57:57.201361 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:57:57.204114 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 23:57:57.205126 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 23:57:57.206056 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 23:57:57.206162 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:57:57.207728 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 23:57:57.207817 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:57:57.209410 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 23:57:57.209539 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:57:57.211148 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 23:57:57.211275 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 23:57:57.215926 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 23:57:57.217880 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 23:57:57.218000 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:57:57.220784 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 23:57:57.221755 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 23:57:57.221877 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:57:57.222572 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 23:57:57.222671 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:57:57.227544 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 23:57:57.243403 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 15 23:57:57.262592 ignition[1084]: INFO : Ignition 2.21.0 Jul 15 23:57:57.262592 ignition[1084]: INFO : Stage: umount Jul 15 23:57:57.264853 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:57:57.264853 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:57:57.267658 ignition[1084]: INFO : umount: umount passed Jul 15 23:57:57.267658 ignition[1084]: INFO : Ignition finished successfully Jul 15 23:57:57.268173 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 23:57:57.273856 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 23:57:57.274036 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 23:57:57.275423 systemd[1]: Stopped target network.target - Network. Jul 15 23:57:57.278352 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 23:57:57.278429 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 23:57:57.280511 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 23:57:57.280564 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 23:57:57.282838 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 23:57:57.282892 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 23:57:57.285175 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 23:57:57.285262 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 23:57:57.287632 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 23:57:57.288711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 23:57:57.289350 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 23:57:57.289550 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 23:57:57.293972 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 23:57:57.294096 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 23:57:57.300777 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 23:57:57.301898 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 23:57:57.301975 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 23:57:57.302834 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 23:57:57.302892 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:57:57.308820 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:57:57.311437 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 23:57:57.311699 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 23:57:57.315815 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 23:57:57.316071 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 23:57:57.317471 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 23:57:57.317521 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:57:57.322423 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 23:57:57.324707 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 23:57:57.324799 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 15 23:57:57.327493 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:57:57.327559 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:57:57.331045 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 23:57:57.331109 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 23:57:57.332969 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:57:57.334819 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 23:57:57.350508 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 23:57:57.350774 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:57:57.352249 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 23:57:57.352324 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 23:57:57.358543 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 23:57:57.358608 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:57:57.360926 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 23:57:57.360984 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:57:57.365466 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 23:57:57.365539 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 23:57:57.368824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 23:57:57.368887 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:57:57.373627 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 23:57:57.373701 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 23:57:57.373754 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:57:57.378832 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 23:57:57.378880 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:57:57.384640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:57:57.384712 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:57:57.387092 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 23:57:57.387465 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 23:57:57.395696 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 23:57:57.395824 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 23:57:57.435880 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 23:57:57.437887 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 23:57:57.462043 systemd[1]: Switching root. Jul 15 23:57:57.508133 systemd-journald[220]: Journal stopped Jul 15 23:57:58.771272 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
Jul 15 23:57:58.771357 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 23:57:58.771380 kernel: SELinux: policy capability open_perms=1 Jul 15 23:57:58.771392 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 23:57:58.771404 kernel: SELinux: policy capability always_check_network=0 Jul 15 23:57:58.771415 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 23:57:58.771427 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 23:57:58.771438 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 23:57:58.771455 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 23:57:58.771466 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 23:57:58.771478 kernel: audit: type=1403 audit(1752623877.885:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 23:57:58.771493 systemd[1]: Successfully loaded SELinux policy in 52.484ms. Jul 15 23:57:58.771521 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.847ms. Jul 15 23:57:58.771546 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:57:58.771573 systemd[1]: Detected virtualization kvm. Jul 15 23:57:58.771600 systemd[1]: Detected architecture x86-64. Jul 15 23:57:58.771624 systemd[1]: Detected first boot. Jul 15 23:57:58.771651 systemd[1]: Initializing machine ID from VM UUID. Jul 15 23:57:58.771679 zram_generator::config[1130]: No configuration found. Jul 15 23:57:58.771714 kernel: Guest personality initialized and is inactive Jul 15 23:57:58.771751 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 23:57:58.771777 kernel: Initialized host personality Jul 15 23:57:58.771824 kernel: NET: Registered PF_VSOCK protocol family Jul 15 23:57:58.771841 systemd[1]: Populated /etc with preset unit settings. Jul 15 23:57:58.771858 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 23:57:58.771873 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 23:57:58.771889 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 23:57:58.771904 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 23:57:58.771922 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 23:57:58.771937 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 23:57:58.771950 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 23:57:58.771965 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 23:57:58.771978 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 23:57:58.771990 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 23:57:58.772003 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 23:57:58.772015 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 23:57:58.772028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
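The systemd 256.8 banner above encodes compile-time options as +FEATURE/-FEATURE tokens. A trivial Python sketch that splits the logged string into enabled and disabled sets (the string is abridged from the banner above):

features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL"

enabled  = sorted(tok[1:] for tok in features.split() if tok.startswith("+"))
disabled = sorted(tok[1:] for tok in features.split() if tok.startswith("-"))
print("enabled: ", enabled)
print("disabled:", disabled)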
Jul 15 23:57:58.772043 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:57:58.772055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 23:57:58.772069 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 23:57:58.772082 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 23:57:58.772094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:57:58.772107 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 23:57:58.772119 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:57:58.772134 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:57:58.772146 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 23:57:58.772158 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 23:57:58.772171 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 23:57:58.772183 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 23:57:58.772195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:57:58.772213 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:57:58.772248 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:57:58.772261 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:57:58.772273 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 23:57:58.772295 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 23:57:58.772308 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 23:57:58.772320 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:57:58.772332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:57:58.772352 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:57:58.772365 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 23:57:58.772379 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 23:57:58.772391 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 23:57:58.772404 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 23:57:58.772419 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:57:58.772432 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 23:57:58.772444 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 23:57:58.772456 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 23:57:58.772469 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 23:57:58.772481 systemd[1]: Reached target machines.target - Containers. Jul 15 23:57:58.772494 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 15 23:57:58.772506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:57:58.772521 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:57:58.772534 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 23:57:58.772546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:57:58.772559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:57:58.772571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:57:58.772583 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 23:57:58.772595 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:57:58.772608 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 23:57:58.772622 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 23:57:58.772635 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 23:57:58.772650 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 23:57:58.772662 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 23:57:58.772675 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:57:58.772687 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:57:58.772699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:57:58.772711 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:57:58.772724 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 23:57:58.772740 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 23:57:58.772753 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:57:58.772765 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 23:57:58.772777 systemd[1]: Stopped verity-setup.service. Jul 15 23:57:58.772790 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:57:58.772828 systemd-journald[1194]: Collecting audit messages is disabled. Jul 15 23:57:58.772854 kernel: loop: module loaded Jul 15 23:57:58.772866 kernel: fuse: init (API version 7.41) Jul 15 23:57:58.772878 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 23:57:58.772891 systemd-journald[1194]: Journal started Jul 15 23:57:58.772916 systemd-journald[1194]: Runtime Journal (/run/log/journal/858e49c37bee4d7ca525591263b85ab5) is 6M, max 48.6M, 42.5M free. Jul 15 23:57:58.790460 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 23:57:58.790528 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 23:57:58.790552 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 23:57:58.790568 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jul 15 23:57:58.790584 kernel: ACPI: bus type drm_connector registered Jul 15 23:57:58.790603 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 23:57:58.483665 systemd[1]: Queued start job for default target multi-user.target. Jul 15 23:57:58.503766 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 23:57:58.504340 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 23:57:58.793257 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:57:58.807270 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:57:58.811450 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 23:57:58.811753 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 23:57:58.813452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:57:58.814569 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:57:58.816021 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:57:58.816352 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:57:58.817955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:57:58.818452 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:57:58.820078 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 23:57:58.820502 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 23:57:58.822447 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:57:58.822741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:57:58.824320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:57:58.825971 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:57:58.827765 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 23:57:58.829430 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 23:57:58.843451 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:57:58.846220 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 23:57:58.848881 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 23:57:58.850142 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 23:57:58.850183 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:57:58.852511 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 23:57:58.860516 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 23:57:58.870065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:57:58.894805 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 23:57:58.899348 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 23:57:58.900681 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 15 23:57:58.909882 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 23:57:58.913305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:57:58.914687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:57:58.919063 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 23:57:58.922158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:57:58.927404 systemd-journald[1194]: Time spent on flushing to /var/log/journal/858e49c37bee4d7ca525591263b85ab5 is 20.209ms for 973 entries. Jul 15 23:57:58.927404 systemd-journald[1194]: System Journal (/var/log/journal/858e49c37bee4d7ca525591263b85ab5) is 8M, max 195.6M, 187.6M free. Jul 15 23:57:59.272620 systemd-journald[1194]: Received client request to flush runtime journal. Jul 15 23:57:59.272679 kernel: loop0: detected capacity change from 0 to 224512 Jul 15 23:57:59.272776 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 23:57:59.272801 kernel: loop1: detected capacity change from 0 to 113872 Jul 15 23:57:59.272825 kernel: loop2: detected capacity change from 0 to 146240 Jul 15 23:57:59.272843 kernel: loop3: detected capacity change from 0 to 224512 Jul 15 23:57:59.272864 kernel: loop4: detected capacity change from 0 to 113872 Jul 15 23:57:59.272885 kernel: loop5: detected capacity change from 0 to 146240 Jul 15 23:57:58.925731 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 23:57:58.927355 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 23:57:58.987280 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:57:59.101177 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 23:57:59.103675 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 23:57:59.107760 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 23:57:59.260647 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 23:57:59.265406 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 23:57:59.270374 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 23:57:59.271012 (sd-merge)[1257]: Merged extensions into '/usr'. Jul 15 23:57:59.276545 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 23:57:59.282200 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 23:57:59.282468 systemd[1]: Reloading... Jul 15 23:57:59.344264 zram_generator::config[1294]: No configuration found. Jul 15 23:57:59.649150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:57:59.706895 ldconfig[1229]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 23:57:59.733273 systemd[1]: Reloading finished in 450 ms. Jul 15 23:57:59.751832 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 23:57:59.775003 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
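The (sd-merge) entries above show systemd-sysext finding the containerd-flatcar, docker-flatcar, and kubernetes extensions and merging them into /usr. A small Python sketch that inspects the /etc/extensions layout the files stage created earlier (kubernetes.raw is a symlink into /opt/extensions); it assumes a host laid out like the one in this log:

import os

EXT_DIR = "/etc/extensions"

for name in sorted(os.listdir(EXT_DIR)):
    path = os.path.join(EXT_DIR, name)
    # The files stage wrote kubernetes.raw as a symlink to the sysext image.
    target = os.readlink(path) if os.path.islink(path) else "(regular file)"
    print(f"{name} -> {target}")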
Jul 15 23:57:59.776712 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 23:57:59.778601 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 23:57:59.780624 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 23:57:59.791173 systemd[1]: Starting ensure-sysext.service... Jul 15 23:57:59.794545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:57:59.797462 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:57:59.814644 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)... Jul 15 23:57:59.814823 systemd[1]: Reloading... Jul 15 23:57:59.824882 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 23:57:59.825681 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 23:57:59.826215 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 23:57:59.826701 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 23:57:59.827946 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 23:57:59.828037 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jul 15 23:57:59.828060 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jul 15 23:57:59.828306 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 15 23:57:59.828387 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 15 23:57:59.832887 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:57:59.833004 systemd-tmpfiles[1336]: Skipping /boot Jul 15 23:57:59.846791 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:57:59.846809 systemd-tmpfiles[1336]: Skipping /boot Jul 15 23:57:59.892274 zram_generator::config[1365]: No configuration found. Jul 15 23:58:00.008208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:58:00.113384 systemd[1]: Reloading finished in 298 ms. Jul 15 23:58:00.152529 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 23:58:00.154197 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:58:00.155912 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:58:00.166794 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:58:00.169373 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 23:58:00.171846 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 23:58:00.177365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:58:00.181095 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:58:00.184694 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
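systemd-tmpfiles above ignores later duplicate lines for the same path (e.g. /var/lib/nfs/sm from nfs-utils.conf). A Python sketch of that first-line-wins rule over illustrative tmpfiles.d entries (only the path column matters here; the modes and owners are made up):

lines = [
    "d /var/lib/nfs/sm     0700 statd statd -",
    "d /var/lib/nfs/sm     0755 root  root  -",  # later duplicate: ignored
    "d /var/lib/nfs/sm.bak 0700 statd statd -",
]

seen = {}
for line in lines:
    path = line.split()[1]
    if path in seen:
        print(f'Duplicate line for path "{path}", ignoring.')
        continue
    seen[path] = line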
Jul 15 23:58:00.191616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:00.191803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:58:00.194761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:58:00.197665 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:58:00.206551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:58:00.207942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:58:00.208066 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:58:00.213076 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 23:58:00.214403 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:00.215915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:58:00.216255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:58:00.218184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:58:00.218426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:58:00.230786 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 23:58:00.232958 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:58:00.233291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:58:00.238295 systemd-udevd[1409]: Using default interface naming scheme 'v255'. Jul 15 23:58:00.257166 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 23:58:00.267116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:00.267465 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:58:00.271471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:58:00.274428 augenrules[1440]: No rules Jul 15 23:58:00.276022 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:58:00.283709 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:58:00.287424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:58:00.336252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:58:00.336466 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:58:00.340564 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 15 23:58:00.341663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:00.343207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:58:00.354358 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 23:58:00.356214 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:58:00.356552 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:58:00.358203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:58:00.358501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:58:00.360405 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 23:58:00.362558 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:58:00.362798 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:58:00.364605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:58:00.364850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:58:00.367057 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:58:00.367317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:58:00.374775 systemd[1]: Finished ensure-sysext.service. Jul 15 23:58:00.395402 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:58:00.396613 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:58:00.396693 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:58:00.400626 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 23:58:00.402333 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:58:00.406024 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 23:58:00.435578 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 23:58:00.523528 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 15 23:58:00.523621 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 23:58:00.532501 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 23:58:00.592125 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 23:58:00.599840 kernel: ACPI: button: Power Button [PWRF] Jul 15 23:58:00.633700 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 23:58:00.634060 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 23:58:00.633907 systemd-networkd[1489]: lo: Link UP Jul 15 23:58:00.633912 systemd-networkd[1489]: lo: Gained carrier Jul 15 23:58:00.635760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 15 23:58:00.636711 systemd-networkd[1489]: Enumeration completed Jul 15 23:58:00.637170 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:58:00.637181 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:58:00.638450 systemd-networkd[1489]: eth0: Link UP Jul 15 23:58:00.638610 systemd-networkd[1489]: eth0: Gained carrier Jul 15 23:58:00.638632 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:58:00.640498 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:58:00.643319 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 23:58:00.647417 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 23:58:00.667413 systemd-networkd[1489]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 23:58:00.673498 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 23:58:00.688267 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 23:58:00.758402 systemd-resolved[1408]: Positive Trust Anchors: Jul 15 23:58:00.758746 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:58:00.758822 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:58:00.768455 systemd-resolved[1408]: Defaulting to hostname 'linux'. Jul 15 23:58:00.768672 kernel: kvm_amd: TSC scaling supported Jul 15 23:58:00.768701 kernel: kvm_amd: Nested Virtualization enabled Jul 15 23:58:00.768724 kernel: kvm_amd: Nested Paging enabled Jul 15 23:58:00.769917 kernel: kvm_amd: LBR virtualization supported Jul 15 23:58:00.771197 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:58:00.771399 systemd[1]: Reached target network.target - Network. Jul 15 23:58:00.771654 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:58:00.776306 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 23:58:00.776474 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 23:58:01.227262 systemd-timesyncd[1490]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 23:58:01.227291 systemd-resolved[1408]: Clock change detected. Flushing caches. Jul 15 23:58:01.227630 systemd-timesyncd[1490]: Initial clock synchronization to Tue 2025-07-15 23:58:01.227113 UTC. 
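The "Clock change detected" note above marks the initial step from systemd-timesyncd, and the journal timestamps jump accordingly. A rough Python estimate of that step from the last pre-sync and first post-sync stamps in this log; since real time also elapsed between the two entries, this bounds rather than measures the true offset:

from datetime import datetime

FMT = "%H:%M:%S.%f"
before = datetime.strptime("23:58:00.776474", FMT)  # last entry before the step
after  = datetime.strptime("23:58:01.227262", FMT)  # first entry after the step

print(f"clock stepped forward by at most ~{(after - before).total_seconds():.3f} s")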
Jul 15 23:58:01.228461 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 15 23:58:01.228487 kernel: kvm_amd: Virtual GIF supported Jul 15 23:58:01.276465 kernel: EDAC MC: Ver: 3.0.0 Jul 15 23:58:01.280190 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:58:01.281822 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:58:01.283085 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 23:58:01.284399 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 23:58:01.285641 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 23:58:01.286926 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 23:58:01.288108 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 23:58:01.289346 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 23:58:01.290597 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 23:58:01.290625 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:58:01.291519 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:58:01.293519 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 23:58:01.296236 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 23:58:01.319035 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 23:58:01.320571 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 23:58:01.321858 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 23:58:01.326874 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 23:58:01.328421 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 23:58:01.330261 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 23:58:01.332307 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:58:01.333494 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:58:01.334707 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:58:01.334751 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:58:01.335924 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 23:58:01.338134 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 23:58:01.340555 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 23:58:01.342888 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 23:58:01.345622 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 23:58:01.346842 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 23:58:01.348559 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 23:58:01.352522 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 15 23:58:01.354417 jq[1537]: false Jul 15 23:58:01.354659 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 23:58:01.356934 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 23:58:01.359255 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 23:58:01.360285 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing passwd entry cache Jul 15 23:58:01.360575 oslogin_cache_refresh[1539]: Refreshing passwd entry cache Jul 15 23:58:01.365938 extend-filesystems[1538]: Found /dev/vda6 Jul 15 23:58:01.372814 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 23:58:01.374982 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 23:58:01.375212 extend-filesystems[1538]: Found /dev/vda9 Jul 15 23:58:01.375667 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 23:58:01.376846 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 23:58:01.380402 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting users, quitting Jul 15 23:58:01.380402 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 23:58:01.380402 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing group entry cache Jul 15 23:58:01.378704 oslogin_cache_refresh[1539]: Failure getting users, quitting Jul 15 23:58:01.378729 oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 23:58:01.378783 oslogin_cache_refresh[1539]: Refreshing group entry cache Jul 15 23:58:01.380621 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 23:58:01.384851 extend-filesystems[1538]: Checking size of /dev/vda9 Jul 15 23:58:01.407662 oslogin_cache_refresh[1539]: Failure getting groups, quitting Jul 15 23:58:01.409105 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting groups, quitting Jul 15 23:58:01.409105 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 23:58:01.407677 oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 23:58:01.409691 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 23:58:01.412055 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 23:58:01.412341 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 23:58:01.412706 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 23:58:01.412960 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 23:58:01.414783 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 23:58:01.415044 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 23:58:01.415124 jq[1556]: true Jul 15 23:58:01.417792 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 23:58:01.418101 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 15 23:58:01.425596 update_engine[1555]: I20250715 23:58:01.425299 1555 main.cc:92] Flatcar Update Engine starting Jul 15 23:58:01.435926 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 23:58:01.443279 jq[1564]: true Jul 15 23:58:01.445049 tar[1562]: linux-amd64/LICENSE Jul 15 23:58:01.445049 tar[1562]: linux-amd64/helm Jul 15 23:58:01.464511 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 23:58:01.464113 dbus-daemon[1535]: [system] SELinux support is enabled Jul 15 23:58:01.469334 extend-filesystems[1538]: Resized partition /dev/vda9 Jul 15 23:58:01.469180 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 23:58:01.476456 update_engine[1555]: I20250715 23:58:01.470883 1555 update_check_scheduler.cc:74] Next update check in 4m17s Jul 15 23:58:01.469202 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 23:58:01.474943 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 23:58:01.474961 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 23:58:01.483458 extend-filesystems[1580]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 23:58:01.486886 systemd[1]: Started update-engine.service - Update Engine. Jul 15 23:58:01.492890 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 23:58:01.540506 systemd-logind[1548]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 23:58:01.541291 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 23:58:01.543609 systemd-logind[1548]: New seat seat0. Jul 15 23:58:01.549040 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 23:58:01.585582 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 23:58:01.588747 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:58:01.640159 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:58:01.648413 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 23:58:01.668837 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 23:58:01.672457 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:58:01.675567 extend-filesystems[1580]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 23:58:01.675567 extend-filesystems[1580]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 23:58:01.675567 extend-filesystems[1580]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 23:58:01.679871 extend-filesystems[1538]: Resized filesystem in /dev/vda9 Jul 15 23:58:01.681060 bash[1597]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:58:01.677096 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 23:58:01.677470 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 23:58:01.687676 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
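The extend-filesystems sequence above grows the root filesystem to fill the virtual disk: the resize2fs 1.47.2 banner plus the kernel's "resizing filesystem from 553472 to 1864699 blocks" line amount to an online grow of the mounted ext4 root, from about 2.3 GB (553472 x 4 KiB blocks) to about 7.6 GB (1864699 x 4 KiB). The roughly equivalent manual step, using the device name from this log (ext4 supports growing, though not shrinking, a mounted filesystem):

    resize2fs /dev/vda9   # grow the mounted ext4 filesystem to fill its partition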
Jul 15 23:58:01.693476 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 23:58:01.696157 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:58:01.697061 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:58:01.700458 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 23:58:01.723106 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:58:01.726308 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:58:01.730675 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 23:58:01.731983 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 23:58:01.735646 containerd[1565]: time="2025-07-15T23:58:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 23:58:01.736949 containerd[1565]: time="2025-07-15T23:58:01.736520754Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:58:01.745787 containerd[1565]: time="2025-07-15T23:58:01.745712575Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.909µs" Jul 15 23:58:01.745787 containerd[1565]: time="2025-07-15T23:58:01.745763451Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:58:01.745787 containerd[1565]: time="2025-07-15T23:58:01.745787776Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:58:01.746076 containerd[1565]: time="2025-07-15T23:58:01.746038156Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 23:58:01.746076 containerd[1565]: time="2025-07-15T23:58:01.746065216Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:58:01.746147 containerd[1565]: time="2025-07-15T23:58:01.746095984Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:58:01.746197 containerd[1565]: time="2025-07-15T23:58:01.746171696Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:58:01.746197 containerd[1565]: time="2025-07-15T23:58:01.746189880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:58:01.746619 containerd[1565]: time="2025-07-15T23:58:01.746566416Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:58:01.746619 containerd[1565]: time="2025-07-15T23:58:01.746592134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:58:01.746619 containerd[1565]: time="2025-07-15T23:58:01.746606491Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:58:01.746619 containerd[1565]: time="2025-07-15T23:58:01.746617862Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:58:01.746768 containerd[1565]: time="2025-07-15T23:58:01.746736435Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:58:01.747120 containerd[1565]: time="2025-07-15T23:58:01.747075099Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:58:01.747160 containerd[1565]: time="2025-07-15T23:58:01.747122608Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:58:01.747160 containerd[1565]: time="2025-07-15T23:58:01.747137416Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:58:01.747204 containerd[1565]: time="2025-07-15T23:58:01.747173253Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 23:58:01.747567 containerd[1565]: time="2025-07-15T23:58:01.747515154Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:58:01.747699 containerd[1565]: time="2025-07-15T23:58:01.747664023Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:58:01.755080 containerd[1565]: time="2025-07-15T23:58:01.755017909Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:58:01.755120 containerd[1565]: time="2025-07-15T23:58:01.755088452Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:58:01.755120 containerd[1565]: time="2025-07-15T23:58:01.755111014Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:58:01.755157 containerd[1565]: time="2025-07-15T23:58:01.755128797Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:58:01.755157 containerd[1565]: time="2025-07-15T23:58:01.755145228Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:58:01.755210 containerd[1565]: time="2025-07-15T23:58:01.755158683Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:58:01.755210 containerd[1565]: time="2025-07-15T23:58:01.755174583Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 23:58:01.755248 containerd[1565]: time="2025-07-15T23:58:01.755188680Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 23:58:01.755248 containerd[1565]: time="2025-07-15T23:58:01.755225198Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:58:01.755248 containerd[1565]: time="2025-07-15T23:58:01.755238443Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 23:58:01.755301 containerd[1565]: time="2025-07-15T23:58:01.755272396Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 23:58:01.755301 containerd[1565]: time="2025-07-15T23:58:01.755291011Z" level=info msg="loading plugin" 
id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:58:01.755557 containerd[1565]: time="2025-07-15T23:58:01.755519009Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:58:01.755588 containerd[1565]: time="2025-07-15T23:58:01.755560146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:58:01.755588 containerd[1565]: time="2025-07-15T23:58:01.755580314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:58:01.755624 containerd[1565]: time="2025-07-15T23:58:01.755593548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:58:01.755624 containerd[1565]: time="2025-07-15T23:58:01.755609579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:58:01.755661 containerd[1565]: time="2025-07-15T23:58:01.755623344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 23:58:01.755661 containerd[1565]: time="2025-07-15T23:58:01.755638573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 23:58:01.755661 containerd[1565]: time="2025-07-15T23:58:01.755654773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:58:01.755731 containerd[1565]: time="2025-07-15T23:58:01.755668619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:58:01.755731 containerd[1565]: time="2025-07-15T23:58:01.755684559Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:58:01.755731 containerd[1565]: time="2025-07-15T23:58:01.755698555Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:58:01.755807 containerd[1565]: time="2025-07-15T23:58:01.755778756Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:58:01.755807 containerd[1565]: time="2025-07-15T23:58:01.755805726Z" level=info msg="Start snapshots syncer" Jul 15 23:58:01.755862 containerd[1565]: time="2025-07-15T23:58:01.755840982Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 23:58:01.756215 containerd[1565]: time="2025-07-15T23:58:01.756148789Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:58:01.756328 containerd[1565]: time="2025-07-15T23:58:01.756228809Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 23:58:01.756624 containerd[1565]: time="2025-07-15T23:58:01.756546725Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 23:58:01.756738 containerd[1565]: time="2025-07-15T23:58:01.756697698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:58:01.756764 containerd[1565]: time="2025-07-15T23:58:01.756736331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:58:01.756764 containerd[1565]: time="2025-07-15T23:58:01.756754224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:58:01.756805 containerd[1565]: time="2025-07-15T23:58:01.756769052Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:58:01.756805 containerd[1565]: time="2025-07-15T23:58:01.756786375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:58:01.756805 containerd[1565]: time="2025-07-15T23:58:01.756800711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:58:01.756870 containerd[1565]: time="2025-07-15T23:58:01.756814978Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:58:01.756870 containerd[1565]: time="2025-07-15T23:58:01.756843341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:58:01.758096 containerd[1565]: 
time="2025-07-15T23:58:01.758053099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:58:01.758096 containerd[1565]: time="2025-07-15T23:58:01.758091531Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:58:01.758979 containerd[1565]: time="2025-07-15T23:58:01.758936646Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:58:01.759068 containerd[1565]: time="2025-07-15T23:58:01.759043827Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:58:01.759068 containerd[1565]: time="2025-07-15T23:58:01.759063323Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:58:01.759111 containerd[1565]: time="2025-07-15T23:58:01.759078361Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:58:01.759111 containerd[1565]: time="2025-07-15T23:58:01.759089642Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:58:01.759111 containerd[1565]: time="2025-07-15T23:58:01.759102446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:58:01.759166 containerd[1565]: time="2025-07-15T23:58:01.759116262Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 23:58:01.759166 containerd[1565]: time="2025-07-15T23:58:01.759141119Z" level=info msg="runtime interface created" Jul 15 23:58:01.759166 containerd[1565]: time="2025-07-15T23:58:01.759148112Z" level=info msg="created NRI interface" Jul 15 23:58:01.759166 containerd[1565]: time="2025-07-15T23:58:01.759158652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 23:58:01.759241 containerd[1565]: time="2025-07-15T23:58:01.759174962Z" level=info msg="Connect containerd service" Jul 15 23:58:01.759241 containerd[1565]: time="2025-07-15T23:58:01.759206331Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:58:01.760408 containerd[1565]: time="2025-07-15T23:58:01.760350997Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:58:01.867895 containerd[1565]: time="2025-07-15T23:58:01.867761674Z" level=info msg="Start subscribing containerd event" Jul 15 23:58:01.868056 containerd[1565]: time="2025-07-15T23:58:01.867989881Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:58:01.868189 containerd[1565]: time="2025-07-15T23:58:01.868048802Z" level=info msg="Start recovering state" Jul 15 23:58:01.868264 containerd[1565]: time="2025-07-15T23:58:01.868070192Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 15 23:58:01.868492 containerd[1565]: time="2025-07-15T23:58:01.868462528Z" level=info msg="Start event monitor" Jul 15 23:58:01.868492 containerd[1565]: time="2025-07-15T23:58:01.868484038Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:58:01.868492 containerd[1565]: time="2025-07-15T23:58:01.868491662Z" level=info msg="Start streaming server" Jul 15 23:58:01.868646 containerd[1565]: time="2025-07-15T23:58:01.868501751Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:58:01.868646 containerd[1565]: time="2025-07-15T23:58:01.868509546Z" level=info msg="runtime interface starting up..." Jul 15 23:58:01.868646 containerd[1565]: time="2025-07-15T23:58:01.868515928Z" level=info msg="starting plugins..." Jul 15 23:58:01.868646 containerd[1565]: time="2025-07-15T23:58:01.868529804Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:58:01.868733 containerd[1565]: time="2025-07-15T23:58:01.868709480Z" level=info msg="containerd successfully booted in 0.134886s" Jul 15 23:58:01.868861 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 23:58:01.966296 tar[1562]: linux-amd64/README.md Jul 15 23:58:01.989811 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 23:58:02.683868 systemd-networkd[1489]: eth0: Gained IPv6LL Jul 15 23:58:02.689282 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 23:58:02.691624 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 23:58:02.694592 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 23:58:02.697286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:58:02.745220 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 23:58:02.771443 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 23:58:02.778305 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 23:58:02.778638 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 23:58:02.781052 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 23:58:03.517840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:58:03.519458 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 23:58:03.521521 systemd[1]: Startup finished in 3.529s (kernel) + 9.235s (initrd) + 5.238s (userspace) = 18.004s. Jul 15 23:58:03.548722 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:58:03.595473 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:58:03.596905 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:35484.service - OpenSSH per-connection server daemon (10.0.0.1:35484). Jul 15 23:58:03.662289 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 35484 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:58:03.664833 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:03.673183 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:58:03.674684 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 23:58:03.684171 systemd-logind[1548]: New session 1 of user core. 
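The "Startup finished in 3.529s (kernel) + 9.235s (initrd) + 5.238s (userspace)" line is systemd's standard boot-time accounting, split by phase. After boot, the same data can be inspected interactively; these are stock systemd tools, not commands this log shows being run:

    systemd-analyze                  # total boot time split by phase
    systemd-analyze blame            # units ordered by time spent starting
    systemd-analyze critical-chain   # the dependency chain that gated boot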
Jul 15 23:58:03.701016 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:58:03.705613 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 23:58:03.728934 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:58:03.732591 systemd-logind[1548]: New session c1 of user core. Jul 15 23:58:03.893714 systemd[1684]: Queued start job for default target default.target. Jul 15 23:58:03.906110 systemd[1684]: Created slice app.slice - User Application Slice. Jul 15 23:58:03.906144 systemd[1684]: Reached target paths.target - Paths. Jul 15 23:58:03.906198 systemd[1684]: Reached target timers.target - Timers. Jul 15 23:58:03.908265 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:58:03.921472 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 23:58:03.921732 systemd[1684]: Reached target sockets.target - Sockets. Jul 15 23:58:03.921849 systemd[1684]: Reached target basic.target - Basic System. Jul 15 23:58:03.921975 systemd[1684]: Reached target default.target - Main User Target. Jul 15 23:58:03.922096 systemd[1684]: Startup finished in 179ms. Jul 15 23:58:03.922341 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:58:03.924063 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 23:58:03.987237 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:35498.service - OpenSSH per-connection server daemon (10.0.0.1:35498). Jul 15 23:58:04.000715 kubelet[1669]: E0715 23:58:04.000664 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:58:04.004861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:58:04.005179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:58:04.005829 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 264.1M memory peak. Jul 15 23:58:04.044088 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 35498 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:58:04.045741 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:04.050320 systemd-logind[1548]: New session 2 of user core. Jul 15 23:58:04.059544 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 23:58:04.115023 sshd[1699]: Connection closed by 10.0.0.1 port 35498 Jul 15 23:58:04.115444 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:04.132959 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:35498.service: Deactivated successfully. Jul 15 23:58:04.135352 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 23:58:04.136200 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Jul 15 23:58:04.139970 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:35506.service - OpenSSH per-connection server daemon (10.0.0.1:35506). Jul 15 23:58:04.141040 systemd-logind[1548]: Removed session 2. 
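The kubelet exit above is expected at this stage: /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-style node that file is only written during kubeadm init or kubeadm join, so the unit crash-loops until then. For orientation only, a minimal hand-written KubeletConfiguration of the sort that file contains (values illustrative; cgroupDriver and the static pod path match what the kubelet reports later in this log):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests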
Jul 15 23:58:04.207320 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 35506 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:58:04.208898 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:04.213631 systemd-logind[1548]: New session 3 of user core. Jul 15 23:58:04.229608 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:58:04.279234 sshd[1707]: Connection closed by 10.0.0.1 port 35506 Jul 15 23:58:04.279644 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:04.292333 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:35506.service: Deactivated successfully. Jul 15 23:58:04.294279 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 23:58:04.295057 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Jul 15 23:58:04.299289 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:35508.service - OpenSSH per-connection server daemon (10.0.0.1:35508). Jul 15 23:58:04.300037 systemd-logind[1548]: Removed session 3. Jul 15 23:58:04.359913 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 35508 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:58:04.361902 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:04.367326 systemd-logind[1548]: New session 4 of user core. Jul 15 23:58:04.374693 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:58:04.432117 sshd[1715]: Connection closed by 10.0.0.1 port 35508 Jul 15 23:58:04.432489 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:04.448279 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:35508.service: Deactivated successfully. Jul 15 23:58:04.450762 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:58:04.451722 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Jul 15 23:58:04.455730 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:35512.service - OpenSSH per-connection server daemon (10.0.0.1:35512). Jul 15 23:58:04.456313 systemd-logind[1548]: Removed session 4. Jul 15 23:58:04.527246 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 35512 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:58:04.528773 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:04.533667 systemd-logind[1548]: New session 5 of user core. Jul 15 23:58:04.544547 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 23:58:04.603689 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 23:58:04.604008 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:58:04.619630 sudo[1724]: pam_unix(sudo:session): session closed for user root Jul 15 23:58:04.621293 sshd[1723]: Connection closed by 10.0.0.1 port 35512 Jul 15 23:58:04.621728 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:04.634791 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:35512.service: Deactivated successfully. Jul 15 23:58:04.637146 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:58:04.638164 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:58:04.641903 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:35526.service - OpenSSH per-connection server daemon (10.0.0.1:35526). 
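Each SSH connection above gets its own transient unit, named sshd@<n>-<local addr>:<port>-<peer addr>:<port>.service, because sshd.socket is declared per-connection (typically Accept=yes), so systemd spawns one service instance per accepted connection. The sudo lines use sudo's standard audit format, "user : PWD=... ; USER=... ; COMMAND=...", recording who ran what, from where, as whom. Live per-connection instances can be listed with a stock command such as:

    systemctl list-units 'sshd@*' --all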
Jul 15 23:58:04.642726 systemd-logind[1548]: Removed session 5. Jul 15 23:58:04.699495 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 35526 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:58:04.701085 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:04.705901 systemd-logind[1548]: New session 6 of user core. Jul 15 23:58:04.716622 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 23:58:04.773086 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 23:58:04.773507 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:58:04.781792 sudo[1734]: pam_unix(sudo:session): session closed for user root Jul 15 23:58:04.789205 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 23:58:04.789568 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:58:04.801041 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:58:04.855205 augenrules[1756]: No rules Jul 15 23:58:04.857356 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:58:04.857698 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:58:04.858970 sudo[1733]: pam_unix(sudo:session): session closed for user root Jul 15 23:58:04.860434 sshd[1732]: Connection closed by 10.0.0.1 port 35526 Jul 15 23:58:04.860752 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:04.873455 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:35526.service: Deactivated successfully. Jul 15 23:58:04.875259 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:58:04.875997 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:58:04.879094 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:35530.service - OpenSSH per-connection server daemon (10.0.0.1:35530). Jul 15 23:58:04.879678 systemd-logind[1548]: Removed session 6. Jul 15 23:58:04.937437 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 35530 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:58:04.939007 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:04.943725 systemd-logind[1548]: New session 7 of user core. Jul 15 23:58:04.957582 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:58:05.011990 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:58:05.012450 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:58:05.653189 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:58:05.671873 (dockerd)[1789]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:58:06.122827 dockerd[1789]: time="2025-07-15T23:58:06.122653707Z" level=info msg="Starting up" Jul 15 23:58:06.123786 dockerd[1789]: time="2025-07-15T23:58:06.123723492Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:58:07.451804 dockerd[1789]: time="2025-07-15T23:58:07.451732254Z" level=info msg="Loading containers: start." 
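The audit-rules sequence above (delete the rule files under /etc/audit/rules.d/, restart audit-rules, then "augenrules: No rules") is the augenrules workflow: the service concatenates /etc/audit/rules.d/*.rules into a single ruleset and loads it into the kernel, and with both rule files removed it loads an empty set. The equivalent manual invocation is the standard:

    augenrules --load   # merge /etc/audit/rules.d/*.rules and load the result into the kernel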
Jul 15 23:58:07.464420 kernel: Initializing XFRM netlink socket Jul 15 23:58:08.011230 systemd-networkd[1489]: docker0: Link UP Jul 15 23:58:08.018439 dockerd[1789]: time="2025-07-15T23:58:08.018357113Z" level=info msg="Loading containers: done." Jul 15 23:58:08.034583 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3666976466-merged.mount: Deactivated successfully. Jul 15 23:58:08.036119 dockerd[1789]: time="2025-07-15T23:58:08.036058277Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:58:08.036213 dockerd[1789]: time="2025-07-15T23:58:08.036176989Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:58:08.036410 dockerd[1789]: time="2025-07-15T23:58:08.036365713Z" level=info msg="Initializing buildkit" Jul 15 23:58:08.075217 dockerd[1789]: time="2025-07-15T23:58:08.075162383Z" level=info msg="Completed buildkit initialization" Jul 15 23:58:08.080591 dockerd[1789]: time="2025-07-15T23:58:08.080528222Z" level=info msg="Daemon has completed initialization" Jul 15 23:58:08.080727 dockerd[1789]: time="2025-07-15T23:58:08.080653056Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:58:08.081036 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:58:09.302456 containerd[1565]: time="2025-07-15T23:58:09.302363096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Jul 15 23:58:10.028553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710372547.mount: Deactivated successfully. Jul 15 23:58:12.331925 containerd[1565]: time="2025-07-15T23:58:12.331825556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:12.332935 containerd[1565]: time="2025-07-15T23:58:12.332867049Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Jul 15 23:58:12.334711 containerd[1565]: time="2025-07-15T23:58:12.334664308Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:12.337758 containerd[1565]: time="2025-07-15T23:58:12.337717902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:12.338803 containerd[1565]: time="2025-07-15T23:58:12.338765646Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 3.036329864s" Jul 15 23:58:12.338803 containerd[1565]: time="2025-07-15T23:58:12.338796725Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Jul 15 23:58:12.339436 containerd[1565]: time="2025-07-15T23:58:12.339367254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Jul 15 
23:58:14.255336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:58:14.258054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:58:14.618584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:58:14.637803 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:58:14.860654 kubelet[2068]: E0715 23:58:14.860561 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:58:14.869232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:58:14.869481 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:58:14.869949 systemd[1]: kubelet.service: Consumed 268ms CPU time, 111.5M memory peak. Jul 15 23:58:15.446255 containerd[1565]: time="2025-07-15T23:58:15.446175911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:15.471370 containerd[1565]: time="2025-07-15T23:58:15.471274782Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Jul 15 23:58:15.490554 containerd[1565]: time="2025-07-15T23:58:15.490489783Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:15.510631 containerd[1565]: time="2025-07-15T23:58:15.510557443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:15.511664 containerd[1565]: time="2025-07-15T23:58:15.511580210Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 3.17214528s" Jul 15 23:58:15.511664 containerd[1565]: time="2025-07-15T23:58:15.511636546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Jul 15 23:58:15.512199 containerd[1565]: time="2025-07-15T23:58:15.512114923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Jul 15 23:58:17.368563 containerd[1565]: time="2025-07-15T23:58:17.368472671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:17.369574 containerd[1565]: time="2025-07-15T23:58:17.369495368Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Jul 15 23:58:17.370880 containerd[1565]: time="2025-07-15T23:58:17.370827034Z" level=info msg="ImageCreate event 
name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:17.373455 containerd[1565]: time="2025-07-15T23:58:17.373420085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:17.374315 containerd[1565]: time="2025-07-15T23:58:17.374260911Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.862080256s" Jul 15 23:58:17.374315 containerd[1565]: time="2025-07-15T23:58:17.374294605Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Jul 15 23:58:17.374822 containerd[1565]: time="2025-07-15T23:58:17.374785505Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Jul 15 23:58:18.773535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2148805052.mount: Deactivated successfully. Jul 15 23:58:19.609289 containerd[1565]: time="2025-07-15T23:58:19.609172482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:19.611360 containerd[1565]: time="2025-07-15T23:58:19.611291916Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Jul 15 23:58:19.615441 containerd[1565]: time="2025-07-15T23:58:19.615359551Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:19.619843 containerd[1565]: time="2025-07-15T23:58:19.619755943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:19.620570 containerd[1565]: time="2025-07-15T23:58:19.620506369Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 2.245689336s" Jul 15 23:58:19.620570 containerd[1565]: time="2025-07-15T23:58:19.620559790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Jul 15 23:58:19.621316 containerd[1565]: time="2025-07-15T23:58:19.621208055Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 23:58:20.235562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235119291.mount: Deactivated successfully. 
Jul 15 23:58:22.203708 containerd[1565]: time="2025-07-15T23:58:22.203625816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:22.204556 containerd[1565]: time="2025-07-15T23:58:22.204480638Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 15 23:58:22.206075 containerd[1565]: time="2025-07-15T23:58:22.206023350Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:22.208801 containerd[1565]: time="2025-07-15T23:58:22.208755512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:22.209969 containerd[1565]: time="2025-07-15T23:58:22.209935003Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.58866919s" Jul 15 23:58:22.210047 containerd[1565]: time="2025-07-15T23:58:22.209971422Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 23:58:22.210530 containerd[1565]: time="2025-07-15T23:58:22.210495003Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 23:58:23.165563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776795271.mount: Deactivated successfully. 
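For scale: the coredns pull above reports 18,565,241 bytes read in 2.58866919s, roughly 18.6 MB / 2.59 s, or about 7.2 MB/s of effective registry throughput. The sizes logged are transfer (compressed layer) sizes, so unpacked on-disk usage will differ.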
Jul 15 23:58:23.173934 containerd[1565]: time="2025-07-15T23:58:23.173843568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:58:23.178016 containerd[1565]: time="2025-07-15T23:58:23.177968350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 23:58:23.180766 containerd[1565]: time="2025-07-15T23:58:23.180711823Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:58:23.183071 containerd[1565]: time="2025-07-15T23:58:23.182998470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:58:23.183624 containerd[1565]: time="2025-07-15T23:58:23.183576854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 973.052376ms" Jul 15 23:58:23.183624 containerd[1565]: time="2025-07-15T23:58:23.183616038Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 23:58:23.184233 containerd[1565]: time="2025-07-15T23:58:23.184071281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 15 23:58:23.889020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2949922931.mount: Deactivated successfully. Jul 15 23:58:25.120416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 23:58:25.123242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:58:25.430560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:58:25.448928 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:58:25.515787 kubelet[2204]: E0715 23:58:25.515679 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:58:25.534311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:58:25.534567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:58:25.535112 systemd[1]: kubelet.service: Consumed 337ms CPU time, 111.4M memory peak. 
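"kubelet.service: Scheduled restart job, restart counter is at 2" is systemd's Restart= logic re-queuing the failed unit. The roughly 10-second gap between each failure and the next start attempt in these timestamps is consistent with a setting like the following (inferred from the timing, not read from the unit file):

    [Service]
    Restart=always
    RestartSec=10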
Jul 15 23:58:26.974286 containerd[1565]: time="2025-07-15T23:58:26.974212031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:26.976090 containerd[1565]: time="2025-07-15T23:58:26.976038054Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 15 23:58:26.977505 containerd[1565]: time="2025-07-15T23:58:26.977458958Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:26.980859 containerd[1565]: time="2025-07-15T23:58:26.980824237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:26.982303 containerd[1565]: time="2025-07-15T23:58:26.982241293Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.798134906s" Jul 15 23:58:26.982303 containerd[1565]: time="2025-07-15T23:58:26.982284745Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 15 23:58:29.971334 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:58:29.971579 systemd[1]: kubelet.service: Consumed 337ms CPU time, 111.4M memory peak. Jul 15 23:58:29.974452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:58:30.004644 systemd[1]: Reload requested from client PID 2244 ('systemctl') (unit session-7.scope)... Jul 15 23:58:30.004660 systemd[1]: Reloading... Jul 15 23:58:30.104410 zram_generator::config[2290]: No configuration found. Jul 15 23:58:30.822511 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:58:30.945401 systemd[1]: Reloading finished in 940 ms. Jul 15 23:58:31.022397 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 23:58:31.022516 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 23:58:31.022884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:58:31.022934 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.3M memory peak. Jul 15 23:58:31.024985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:58:31.199890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:58:31.217741 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:58:31.252597 kubelet[2335]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:58:31.252597 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 15 23:58:31.252597 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:58:31.253026 kubelet[2335]: I0715 23:58:31.252633 2335 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:58:32.139941 kubelet[2335]: I0715 23:58:32.139889 2335 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 23:58:32.141012 kubelet[2335]: I0715 23:58:32.140989 2335 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:58:32.142212 kubelet[2335]: I0715 23:58:32.141924 2335 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 23:58:32.169591 kubelet[2335]: E0715 23:58:32.169509 2335 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:32.170459 kubelet[2335]: I0715 23:58:32.170424 2335 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:58:32.178835 kubelet[2335]: I0715 23:58:32.178796 2335 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:58:32.184401 kubelet[2335]: I0715 23:58:32.184353 2335 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:58:32.185911 kubelet[2335]: I0715 23:58:32.185645 2335 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:58:32.186094 kubelet[2335]: I0715 23:58:32.185898 2335 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:58:32.186322 kubelet[2335]: I0715 23:58:32.186101 2335 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:58:32.186322 kubelet[2335]: I0715 23:58:32.186113 2335 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 23:58:32.186322 kubelet[2335]: I0715 23:58:32.186310 2335 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:58:32.188850 kubelet[2335]: I0715 23:58:32.188811 2335 kubelet.go:446] "Attempting to sync node with API server" Jul 15 23:58:32.190181 kubelet[2335]: I0715 23:58:32.190144 2335 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:58:32.190181 kubelet[2335]: I0715 23:58:32.190184 2335 kubelet.go:352] "Adding apiserver pod source" Jul 15 23:58:32.190262 kubelet[2335]: I0715 23:58:32.190194 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:58:32.193669 kubelet[2335]: W0715 23:58:32.193598 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:32.193669 kubelet[2335]: E0715 23:58:32.193655 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:32.193852 kubelet[2335]: I0715 23:58:32.193738 2335 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:58:32.193852 kubelet[2335]: W0715 23:58:32.193723 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:32.193852 kubelet[2335]: E0715 23:58:32.193786 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:32.194167 kubelet[2335]: I0715 23:58:32.194145 2335 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:58:32.194999 kubelet[2335]: W0715 23:58:32.194970 2335 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 23:58:32.196782 kubelet[2335]: I0715 23:58:32.196752 2335 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:58:32.196826 kubelet[2335]: I0715 23:58:32.196789 2335 server.go:1287] "Started kubelet" Jul 15 23:58:32.198802 kubelet[2335]: I0715 23:58:32.198747 2335 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:58:32.200032 kubelet[2335]: I0715 23:58:32.199435 2335 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:58:32.200032 kubelet[2335]: I0715 23:58:32.199572 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:58:32.200032 kubelet[2335]: I0715 23:58:32.199797 2335 server.go:479] "Adding debug handlers to kubelet server" Jul 15 23:58:32.200126 kubelet[2335]: I0715 23:58:32.200102 2335 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:58:32.202179 kubelet[2335]: E0715 23:58:32.201725 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:32.202179 kubelet[2335]: I0715 23:58:32.201756 2335 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:58:32.202179 kubelet[2335]: I0715 23:58:32.201813 2335 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:58:32.202179 kubelet[2335]: I0715 23:58:32.201900 2335 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:58:32.202179 kubelet[2335]: I0715 23:58:32.201956 2335 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:58:32.202326 kubelet[2335]: W0715 23:58:32.202233 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:32.202425 kubelet[2335]: E0715 23:58:32.202263 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: 
connect: connection refused" logger="UnhandledError" Jul 15 23:58:32.202425 kubelet[2335]: E0715 23:58:32.201006 2335 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185292298f57fc56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:58:32.196766806 +0000 UTC m=+0.975203499,LastTimestamp:2025-07-15 23:58:32.196766806 +0000 UTC m=+0.975203499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 23:58:32.202889 kubelet[2335]: E0715 23:58:32.202834 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" Jul 15 23:58:32.203790 kubelet[2335]: E0715 23:58:32.203166 2335 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:58:32.203790 kubelet[2335]: I0715 23:58:32.203294 2335 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:58:32.203790 kubelet[2335]: I0715 23:58:32.203367 2335 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:58:32.204762 kubelet[2335]: I0715 23:58:32.204727 2335 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:58:32.221937 kubelet[2335]: I0715 23:58:32.221864 2335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:58:32.222937 kubelet[2335]: I0715 23:58:32.222895 2335 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:58:32.222937 kubelet[2335]: I0715 23:58:32.222916 2335 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:58:32.222937 kubelet[2335]: I0715 23:58:32.222935 2335 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:58:32.223992 kubelet[2335]: I0715 23:58:32.223915 2335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:58:32.223992 kubelet[2335]: I0715 23:58:32.223936 2335 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 23:58:32.223992 kubelet[2335]: I0715 23:58:32.223957 2335 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
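
The "Failed to ensure lease exists, will retry" entries scattered through this stretch of the log show the retry interval doubling on each failure: 200ms, then 400ms, 800ms, 1.6s, and finally 3.2s. A minimal Go sketch of that doubling backoff follows; the 7s ceiling is an assumption for illustration, since the kubelet's actual cap is not visible in this log.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Doubling retry interval, as seen in the lease controller
    	// messages: 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s.
    	interval := 200 * time.Millisecond
    	maxInterval := 7 * time.Second // assumed ceiling, not taken from the log
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("attempt %d: next retry in %v\n", attempt, interval)
    		interval *= 2
    		if interval > maxInterval {
    			interval = maxInterval
    		}
    	}
    }
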
Jul 15 23:58:32.223992 kubelet[2335]: I0715 23:58:32.223965 2335 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 23:58:32.224136 kubelet[2335]: E0715 23:58:32.224026 2335 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:58:32.224943 kubelet[2335]: W0715 23:58:32.224782 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:32.224943 kubelet[2335]: E0715 23:58:32.224833 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:32.302025 kubelet[2335]: E0715 23:58:32.301947 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:32.324349 kubelet[2335]: E0715 23:58:32.324274 2335 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:58:32.402947 kubelet[2335]: E0715 23:58:32.402778 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:32.404462 kubelet[2335]: E0715 23:58:32.404374 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" Jul 15 23:58:32.503790 kubelet[2335]: E0715 23:58:32.503714 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:32.524950 kubelet[2335]: E0715 23:58:32.524897 2335 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:58:32.604508 kubelet[2335]: E0715 23:58:32.604438 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:32.705624 kubelet[2335]: E0715 23:58:32.705503 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:32.805505 kubelet[2335]: E0715 23:58:32.805444 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms" Jul 15 23:58:32.806411 kubelet[2335]: E0715 23:58:32.806336 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:32.907097 kubelet[2335]: E0715 23:58:32.907035 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:32.925277 kubelet[2335]: E0715 23:58:32.925224 2335 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:58:33.008084 kubelet[2335]: E0715 23:58:33.007914 2335 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.108691 kubelet[2335]: E0715 23:58:33.108611 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.146278 kubelet[2335]: W0715 23:58:33.146209 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:33.146278 kubelet[2335]: E0715 23:58:33.146275 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:33.157884 kubelet[2335]: W0715 23:58:33.157832 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:33.157884 kubelet[2335]: E0715 23:58:33.157877 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:33.208909 kubelet[2335]: E0715 23:58:33.208836 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.309431 kubelet[2335]: E0715 23:58:33.309267 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.410009 kubelet[2335]: E0715 23:58:33.409933 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.510927 kubelet[2335]: E0715 23:58:33.510834 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.588858 kubelet[2335]: W0715 23:58:33.588652 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:33.588858 kubelet[2335]: E0715 23:58:33.588722 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:33.607167 kubelet[2335]: E0715 23:58:33.607095 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="1.6s" Jul 15 23:58:33.611296 kubelet[2335]: E0715 23:58:33.611238 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.655089 
kubelet[2335]: W0715 23:58:33.655010 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:33.655089 kubelet[2335]: E0715 23:58:33.655084 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:33.711650 kubelet[2335]: E0715 23:58:33.711567 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.725791 kubelet[2335]: E0715 23:58:33.725747 2335 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:58:33.812549 kubelet[2335]: E0715 23:58:33.812452 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.864423 kubelet[2335]: I0715 23:58:33.864195 2335 policy_none.go:49] "None policy: Start" Jul 15 23:58:33.864423 kubelet[2335]: I0715 23:58:33.864245 2335 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:58:33.864423 kubelet[2335]: I0715 23:58:33.864263 2335 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:58:33.913198 kubelet[2335]: E0715 23:58:33.913148 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:33.935201 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:58:33.946828 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 23:58:33.950313 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 23:58:33.968935 kubelet[2335]: I0715 23:58:33.968896 2335 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:58:33.969216 kubelet[2335]: I0715 23:58:33.969199 2335 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:58:33.969269 kubelet[2335]: I0715 23:58:33.969215 2335 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:58:33.969567 kubelet[2335]: I0715 23:58:33.969541 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:58:33.970332 kubelet[2335]: E0715 23:58:33.970312 2335 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 23:58:33.970409 kubelet[2335]: E0715 23:58:33.970347 2335 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 23:58:34.070905 kubelet[2335]: I0715 23:58:34.070851 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:58:34.071448 kubelet[2335]: E0715 23:58:34.071374 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jul 15 23:58:34.273990 kubelet[2335]: I0715 23:58:34.273841 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:58:34.274372 kubelet[2335]: E0715 23:58:34.274325 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jul 15 23:58:34.302593 kubelet[2335]: E0715 23:58:34.302535 2335 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:34.676091 kubelet[2335]: I0715 23:58:34.675943 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:58:34.676558 kubelet[2335]: E0715 23:58:34.676327 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jul 15 23:58:35.208468 kubelet[2335]: E0715 23:58:35.208408 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="3.2s" Jul 15 23:58:35.336298 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice. Jul 15 23:58:35.349271 kubelet[2335]: E0715 23:58:35.349227 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:35.351454 systemd[1]: Created slice kubepods-burstable-podbdc1916553ffd86f3c2dd546690f3e64.slice - libcontainer container kubepods-burstable-podbdc1916553ffd86f3c2dd546690f3e64.slice. Jul 15 23:58:35.354166 kubelet[2335]: E0715 23:58:35.354121 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:35.357290 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice. 
Jul 15 23:58:35.358924 kubelet[2335]: E0715 23:58:35.358888 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:35.422550 kubelet[2335]: I0715 23:58:35.422452 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bdc1916553ffd86f3c2dd546690f3e64-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bdc1916553ffd86f3c2dd546690f3e64\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:58:35.422550 kubelet[2335]: I0715 23:58:35.422504 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:58:35.422550 kubelet[2335]: I0715 23:58:35.422559 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:58:35.422809 kubelet[2335]: I0715 23:58:35.422586 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:58:35.422809 kubelet[2335]: I0715 23:58:35.422606 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:58:35.422809 kubelet[2335]: I0715 23:58:35.422624 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:58:35.422809 kubelet[2335]: I0715 23:58:35.422643 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bdc1916553ffd86f3c2dd546690f3e64-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bdc1916553ffd86f3c2dd546690f3e64\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:58:35.422809 kubelet[2335]: I0715 23:58:35.422661 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bdc1916553ffd86f3c2dd546690f3e64-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bdc1916553ffd86f3c2dd546690f3e64\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:58:35.422965 kubelet[2335]: I0715 23:58:35.422680 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:58:35.477934 kubelet[2335]: I0715 23:58:35.477812 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:58:35.478220 kubelet[2335]: E0715 23:58:35.478193 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jul 15 23:58:35.524344 kubelet[2335]: W0715 23:58:35.524274 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:35.524344 kubelet[2335]: E0715 23:58:35.524330 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:35.650498 kubelet[2335]: E0715 23:58:35.650438 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:35.651183 containerd[1565]: time="2025-07-15T23:58:35.651137075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}" Jul 15 23:58:35.655573 kubelet[2335]: E0715 23:58:35.655513 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:35.656259 containerd[1565]: time="2025-07-15T23:58:35.656104072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bdc1916553ffd86f3c2dd546690f3e64,Namespace:kube-system,Attempt:0,}" Jul 15 23:58:35.656427 kubelet[2335]: W0715 23:58:35.656347 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:35.656427 kubelet[2335]: E0715 23:58:35.656414 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:35.659864 kubelet[2335]: E0715 23:58:35.659839 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:35.660470 containerd[1565]: time="2025-07-15T23:58:35.660424242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}" Jul 15 23:58:35.796014 kubelet[2335]: 
W0715 23:58:35.795852 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:35.796014 kubelet[2335]: E0715 23:58:35.795910 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:35.810743 containerd[1565]: time="2025-07-15T23:58:35.810595758Z" level=info msg="connecting to shim 6e37baeea9e790a797dae0fcc7016b0b8460cdcb9b922a9e2289ee75d9a0ed0b" address="unix:///run/containerd/s/5fc9a19903f7bd7eb9cdc335ebf7ed578a94a2acb3757ab9957fb2305d8c47cf" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:35.813133 containerd[1565]: time="2025-07-15T23:58:35.813092596Z" level=info msg="connecting to shim b81670396e771a4deaca756e566dff13592bf56c737d7af6b4fb97f1c4c0c2e0" address="unix:///run/containerd/s/52efcc4c18fba7f122f1a4dffd1b7fcaecfc045e4f655068cad199c2ba2f2131" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:35.813838 containerd[1565]: time="2025-07-15T23:58:35.813815298Z" level=info msg="connecting to shim 6c487cee418ac83eb73ac73390a1104888153f04d373a647710a1892fc4b0cda" address="unix:///run/containerd/s/21ecba47e4966b9fa40c8003ebe4a3f98f1c2228efe647fcf9ea0701c16594e6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:35.838978 kubelet[2335]: W0715 23:58:35.838929 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jul 15 23:58:35.839114 kubelet[2335]: E0715 23:58:35.838991 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:58:35.843573 systemd[1]: Started cri-containerd-b81670396e771a4deaca756e566dff13592bf56c737d7af6b4fb97f1c4c0c2e0.scope - libcontainer container b81670396e771a4deaca756e566dff13592bf56c737d7af6b4fb97f1c4c0c2e0. Jul 15 23:58:35.849098 systemd[1]: Started cri-containerd-6c487cee418ac83eb73ac73390a1104888153f04d373a647710a1892fc4b0cda.scope - libcontainer container 6c487cee418ac83eb73ac73390a1104888153f04d373a647710a1892fc4b0cda. Jul 15 23:58:35.851290 systemd[1]: Started cri-containerd-6e37baeea9e790a797dae0fcc7016b0b8460cdcb9b922a9e2289ee75d9a0ed0b.scope - libcontainer container 6e37baeea9e790a797dae0fcc7016b0b8460cdcb9b922a9e2289ee75d9a0ed0b. 
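
Each "connecting to shim" entry above carries a per-shim address of the form unix:///run/containerd/s/<hash>, over which containerd speaks ttrpc (protocol=ttrpc version=3) to the shim process. A minimal sketch of opening such a socket from Go, using plain net.Dial against one of the paths from the log rather than containerd's actual ttrpc client:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Path copied from a shim address above; dialing it only
    	// succeeds on the host that produced this log.
    	const sock = "/run/containerd/s/5fc9a19903f7bd7eb9cdc335ebf7ed578a94a2acb3757ab9957fb2305d8c47cf"
    	conn, err := net.Dial("unix", sock)
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("connected to shim socket")
    }
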
Jul 15 23:58:35.909284 containerd[1565]: time="2025-07-15T23:58:35.909205927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"b81670396e771a4deaca756e566dff13592bf56c737d7af6b4fb97f1c4c0c2e0\"" Jul 15 23:58:35.911144 containerd[1565]: time="2025-07-15T23:58:35.911028337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c487cee418ac83eb73ac73390a1104888153f04d373a647710a1892fc4b0cda\"" Jul 15 23:58:35.912172 kubelet[2335]: E0715 23:58:35.912145 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:35.913807 kubelet[2335]: E0715 23:58:35.913707 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:35.915949 containerd[1565]: time="2025-07-15T23:58:35.915809646Z" level=info msg="CreateContainer within sandbox \"b81670396e771a4deaca756e566dff13592bf56c737d7af6b4fb97f1c4c0c2e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:58:35.916139 containerd[1565]: time="2025-07-15T23:58:35.916112509Z" level=info msg="CreateContainer within sandbox \"6c487cee418ac83eb73ac73390a1104888153f04d373a647710a1892fc4b0cda\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:58:35.916736 containerd[1565]: time="2025-07-15T23:58:35.916709830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bdc1916553ffd86f3c2dd546690f3e64,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e37baeea9e790a797dae0fcc7016b0b8460cdcb9b922a9e2289ee75d9a0ed0b\"" Jul 15 23:58:35.917480 kubelet[2335]: E0715 23:58:35.917396 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:35.919249 containerd[1565]: time="2025-07-15T23:58:35.919195106Z" level=info msg="CreateContainer within sandbox \"6e37baeea9e790a797dae0fcc7016b0b8460cdcb9b922a9e2289ee75d9a0ed0b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:58:35.933717 containerd[1565]: time="2025-07-15T23:58:35.933649375Z" level=info msg="Container 67281e8b466152d1ca1eb2e2318ead1fa98de6ca182d0721aa3215dba664c2cb: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:35.934293 containerd[1565]: time="2025-07-15T23:58:35.934231536Z" level=info msg="Container 9f9dc898c3b1c93d599bb9922880c519983cce014d85f1409be7c31dac8f55f8: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:35.937655 containerd[1565]: time="2025-07-15T23:58:35.937598581Z" level=info msg="Container 3d6f6ec155c640e45defe38acae367eb8ddb1ba4531cf95b682d6240802efc75: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:35.946418 containerd[1565]: time="2025-07-15T23:58:35.945211523Z" level=info msg="CreateContainer within sandbox \"b81670396e771a4deaca756e566dff13592bf56c737d7af6b4fb97f1c4c0c2e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"67281e8b466152d1ca1eb2e2318ead1fa98de6ca182d0721aa3215dba664c2cb\"" Jul 15 23:58:35.949139 containerd[1565]: time="2025-07-15T23:58:35.949075024Z" 
level=info msg="StartContainer for \"67281e8b466152d1ca1eb2e2318ead1fa98de6ca182d0721aa3215dba664c2cb\"" Jul 15 23:58:35.950714 containerd[1565]: time="2025-07-15T23:58:35.950683442Z" level=info msg="connecting to shim 67281e8b466152d1ca1eb2e2318ead1fa98de6ca182d0721aa3215dba664c2cb" address="unix:///run/containerd/s/52efcc4c18fba7f122f1a4dffd1b7fcaecfc045e4f655068cad199c2ba2f2131" protocol=ttrpc version=3 Jul 15 23:58:35.958034 containerd[1565]: time="2025-07-15T23:58:35.957983792Z" level=info msg="CreateContainer within sandbox \"6e37baeea9e790a797dae0fcc7016b0b8460cdcb9b922a9e2289ee75d9a0ed0b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3d6f6ec155c640e45defe38acae367eb8ddb1ba4531cf95b682d6240802efc75\"" Jul 15 23:58:35.959423 containerd[1565]: time="2025-07-15T23:58:35.958689341Z" level=info msg="StartContainer for \"3d6f6ec155c640e45defe38acae367eb8ddb1ba4531cf95b682d6240802efc75\"" Jul 15 23:58:35.960328 containerd[1565]: time="2025-07-15T23:58:35.960288311Z" level=info msg="connecting to shim 3d6f6ec155c640e45defe38acae367eb8ddb1ba4531cf95b682d6240802efc75" address="unix:///run/containerd/s/5fc9a19903f7bd7eb9cdc335ebf7ed578a94a2acb3757ab9957fb2305d8c47cf" protocol=ttrpc version=3 Jul 15 23:58:35.961200 containerd[1565]: time="2025-07-15T23:58:35.961164738Z" level=info msg="CreateContainer within sandbox \"6c487cee418ac83eb73ac73390a1104888153f04d373a647710a1892fc4b0cda\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f9dc898c3b1c93d599bb9922880c519983cce014d85f1409be7c31dac8f55f8\"" Jul 15 23:58:35.961602 containerd[1565]: time="2025-07-15T23:58:35.961575740Z" level=info msg="StartContainer for \"9f9dc898c3b1c93d599bb9922880c519983cce014d85f1409be7c31dac8f55f8\"" Jul 15 23:58:35.963131 containerd[1565]: time="2025-07-15T23:58:35.963090798Z" level=info msg="connecting to shim 9f9dc898c3b1c93d599bb9922880c519983cce014d85f1409be7c31dac8f55f8" address="unix:///run/containerd/s/21ecba47e4966b9fa40c8003ebe4a3f98f1c2228efe647fcf9ea0701c16594e6" protocol=ttrpc version=3 Jul 15 23:58:35.976042 systemd[1]: Started cri-containerd-67281e8b466152d1ca1eb2e2318ead1fa98de6ca182d0721aa3215dba664c2cb.scope - libcontainer container 67281e8b466152d1ca1eb2e2318ead1fa98de6ca182d0721aa3215dba664c2cb. Jul 15 23:58:35.987675 systemd[1]: Started cri-containerd-3d6f6ec155c640e45defe38acae367eb8ddb1ba4531cf95b682d6240802efc75.scope - libcontainer container 3d6f6ec155c640e45defe38acae367eb8ddb1ba4531cf95b682d6240802efc75. Jul 15 23:58:35.998646 systemd[1]: Started cri-containerd-9f9dc898c3b1c93d599bb9922880c519983cce014d85f1409be7c31dac8f55f8.scope - libcontainer container 9f9dc898c3b1c93d599bb9922880c519983cce014d85f1409be7c31dac8f55f8. 
Jul 15 23:58:36.055259 containerd[1565]: time="2025-07-15T23:58:36.054523421Z" level=info msg="StartContainer for \"67281e8b466152d1ca1eb2e2318ead1fa98de6ca182d0721aa3215dba664c2cb\" returns successfully" Jul 15 23:58:36.061189 containerd[1565]: time="2025-07-15T23:58:36.061127819Z" level=info msg="StartContainer for \"3d6f6ec155c640e45defe38acae367eb8ddb1ba4531cf95b682d6240802efc75\" returns successfully" Jul 15 23:58:36.077832 containerd[1565]: time="2025-07-15T23:58:36.077767754Z" level=info msg="StartContainer for \"9f9dc898c3b1c93d599bb9922880c519983cce014d85f1409be7c31dac8f55f8\" returns successfully" Jul 15 23:58:36.237778 kubelet[2335]: E0715 23:58:36.237724 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:36.237919 kubelet[2335]: E0715 23:58:36.237876 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:36.240232 kubelet[2335]: E0715 23:58:36.240196 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:36.240328 kubelet[2335]: E0715 23:58:36.240302 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:36.243406 kubelet[2335]: E0715 23:58:36.243359 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:36.243528 kubelet[2335]: E0715 23:58:36.243498 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:37.082634 kubelet[2335]: I0715 23:58:37.082566 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:58:37.196776 kubelet[2335]: I0715 23:58:37.196709 2335 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 23:58:37.196776 kubelet[2335]: E0715 23:58:37.196748 2335 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 23:58:37.206937 kubelet[2335]: E0715 23:58:37.206895 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:37.242535 kubelet[2335]: E0715 23:58:37.242408 2335 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185292298f57fc56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:58:32.196766806 +0000 UTC m=+0.975203499,LastTimestamp:2025-07-15 23:58:32.196766806 +0000 UTC m=+0.975203499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 23:58:37.245427 kubelet[2335]: E0715 23:58:37.245406 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:37.245596 kubelet[2335]: E0715 23:58:37.245529 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:37.245920 kubelet[2335]: E0715 23:58:37.245858 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:37.246007 kubelet[2335]: E0715 23:58:37.245922 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:37.246085 kubelet[2335]: E0715 23:58:37.246064 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:37.246140 kubelet[2335]: E0715 23:58:37.246067 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:37.307674 kubelet[2335]: E0715 23:58:37.307617 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:37.408303 kubelet[2335]: E0715 23:58:37.408153 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:37.508924 kubelet[2335]: E0715 23:58:37.508857 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:37.609886 kubelet[2335]: E0715 23:58:37.609808 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:37.710608 kubelet[2335]: E0715 23:58:37.710431 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:37.811241 kubelet[2335]: E0715 23:58:37.811159 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:37.912136 kubelet[2335]: E0715 23:58:37.912050 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.013123 kubelet[2335]: E0715 23:58:38.012977 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.113912 kubelet[2335]: E0715 23:58:38.113847 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.214056 kubelet[2335]: E0715 23:58:38.214002 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.315307 kubelet[2335]: E0715 23:58:38.315136 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.415569 kubelet[2335]: E0715 23:58:38.415524 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.516368 kubelet[2335]: E0715 23:58:38.516305 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.617087 kubelet[2335]: E0715 23:58:38.616934 2335 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.717582 kubelet[2335]: E0715 23:58:38.717507 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.818160 kubelet[2335]: E0715 23:58:38.818101 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:38.918958 kubelet[2335]: E0715 23:58:38.918803 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.019182 kubelet[2335]: E0715 23:58:39.019092 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.119921 kubelet[2335]: E0715 23:58:39.119874 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.220271 kubelet[2335]: E0715 23:58:39.220135 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.321040 kubelet[2335]: E0715 23:58:39.320982 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.421951 kubelet[2335]: E0715 23:58:39.421890 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.523017 kubelet[2335]: E0715 23:58:39.522835 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.623705 kubelet[2335]: E0715 23:58:39.623636 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.724145 kubelet[2335]: E0715 23:58:39.724053 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.825015 kubelet[2335]: E0715 23:58:39.824859 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:39.925737 kubelet[2335]: E0715 23:58:39.925660 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.026914 kubelet[2335]: E0715 23:58:40.026841 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.127848 kubelet[2335]: E0715 23:58:40.127678 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.228406 kubelet[2335]: E0715 23:58:40.228271 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.328544 kubelet[2335]: E0715 23:58:40.328469 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.428794 kubelet[2335]: E0715 23:58:40.428606 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.457668 kubelet[2335]: E0715 23:58:40.457634 2335 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:58:40.457820 kubelet[2335]: E0715 23:58:40.457779 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:58:40.529741 kubelet[2335]: E0715 23:58:40.529666 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.630716 kubelet[2335]: E0715 23:58:40.630652 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.731145 kubelet[2335]: E0715 23:58:40.731087 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.831933 kubelet[2335]: E0715 23:58:40.831869 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:40.932775 kubelet[2335]: E0715 23:58:40.932698 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:41.034043 kubelet[2335]: E0715 23:58:41.033858 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:41.081430 systemd[1]: Reload requested from client PID 2614 ('systemctl') (unit session-7.scope)... Jul 15 23:58:41.081449 systemd[1]: Reloading... Jul 15 23:58:41.134485 kubelet[2335]: E0715 23:58:41.134450 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:41.177432 zram_generator::config[2657]: No configuration found. Jul 15 23:58:41.234731 kubelet[2335]: E0715 23:58:41.234682 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:41.294335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:58:41.335695 kubelet[2335]: E0715 23:58:41.335638 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:41.436331 kubelet[2335]: E0715 23:58:41.436289 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:41.443093 systemd[1]: Reloading finished in 361 ms. Jul 15 23:58:41.475299 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:58:41.491634 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:58:41.491938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:58:41.492005 systemd[1]: kubelet.service: Consumed 953ms CPU time, 131.9M memory peak. Jul 15 23:58:41.494216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:58:41.729214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:58:41.746015 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:58:41.799407 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:58:41.799407 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
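
Both kubelet starts in this log (PIDs 2335 and 2702) emit the same deprecation warnings, each pointing at the config file named by --config. A hedged sketch of the config-file equivalents for the two flags whose warnings say so explicitly is given below; the field spellings follow KubeletConfiguration (kubelet.config.k8s.io/v1beta1) as best recalled and should be verified against the linked kubelet-config-file docs.

    // Package kubeletcfg holds a hypothetical subset of
    // KubeletConfiguration covering the deprecated flags warned about
    // above. Field names are assumptions, not verified against the API.
    package kubeletcfg

    type Subset struct {
    	// containerRuntimeEndpoint replaces --container-runtime-endpoint.
    	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint,omitempty"`
    	// volumePluginDir replaces --volume-plugin-dir.
    	VolumePluginDir string `json:"volumePluginDir,omitempty"`
    }
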
Jul 15 23:58:41.799407 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:58:41.799895 kubelet[2702]: I0715 23:58:41.799465 2702 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:58:41.809232 kubelet[2702]: I0715 23:58:41.809163 2702 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 23:58:41.809232 kubelet[2702]: I0715 23:58:41.809197 2702 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:58:41.809525 kubelet[2702]: I0715 23:58:41.809502 2702 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 23:58:41.810773 kubelet[2702]: I0715 23:58:41.810736 2702 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 23:58:41.813718 kubelet[2702]: I0715 23:58:41.813670 2702 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:58:41.817842 kubelet[2702]: I0715 23:58:41.817817 2702 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:58:41.825597 kubelet[2702]: I0715 23:58:41.825548 2702 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 23:58:41.825831 kubelet[2702]: I0715 23:58:41.825793 2702 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:58:41.825999 kubelet[2702]: I0715 23:58:41.825820 2702 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:58:41.825999 kubelet[2702]: I0715 23:58:41.826001 2702 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:58:41.826145 
kubelet[2702]: I0715 23:58:41.826011 2702 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 23:58:41.826145 kubelet[2702]: I0715 23:58:41.826063 2702 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:58:41.826233 kubelet[2702]: I0715 23:58:41.826216 2702 kubelet.go:446] "Attempting to sync node with API server" Jul 15 23:58:41.826273 kubelet[2702]: I0715 23:58:41.826240 2702 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:58:41.826273 kubelet[2702]: I0715 23:58:41.826261 2702 kubelet.go:352] "Adding apiserver pod source" Jul 15 23:58:41.826273 kubelet[2702]: I0715 23:58:41.826272 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:58:41.827417 kubelet[2702]: I0715 23:58:41.827340 2702 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:58:41.827810 kubelet[2702]: I0715 23:58:41.827768 2702 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:58:41.828287 kubelet[2702]: I0715 23:58:41.828206 2702 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:58:41.828287 kubelet[2702]: I0715 23:58:41.828235 2702 server.go:1287] "Started kubelet" Jul 15 23:58:41.830093 kubelet[2702]: I0715 23:58:41.830061 2702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:58:41.833760 kubelet[2702]: I0715 23:58:41.833070 2702 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:58:41.834101 kubelet[2702]: I0715 23:58:41.834080 2702 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:58:41.834710 kubelet[2702]: I0715 23:58:41.834685 2702 server.go:479] "Adding debug handlers to kubelet server" Jul 15 23:58:41.835769 kubelet[2702]: E0715 23:58:41.835728 2702 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:58:41.835976 kubelet[2702]: I0715 23:58:41.835924 2702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:58:41.836286 kubelet[2702]: I0715 23:58:41.836261 2702 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:58:41.836688 kubelet[2702]: I0715 23:58:41.836648 2702 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:58:41.836770 kubelet[2702]: I0715 23:58:41.836737 2702 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:58:41.836901 kubelet[2702]: I0715 23:58:41.836878 2702 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:58:41.836978 kubelet[2702]: E0715 23:58:41.836944 2702 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:58:41.837940 kubelet[2702]: I0715 23:58:41.837917 2702 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:58:41.838117 kubelet[2702]: I0715 23:58:41.838092 2702 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:58:41.843130 kubelet[2702]: I0715 23:58:41.843090 2702 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:58:41.853711 kubelet[2702]: I0715 23:58:41.853609 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:58:41.856106 kubelet[2702]: I0715 23:58:41.855995 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:58:41.856106 kubelet[2702]: I0715 23:58:41.856050 2702 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 23:58:41.856106 kubelet[2702]: I0715 23:58:41.856085 2702 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
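
The "Systemd watchdog is not enabled" entries reflect the sd_notify watchdog protocol: when a unit sets WatchdogSec=, systemd exports WATCHDOG_USEC (and WATCHDOG_PID) into the service's environment, and the service must then ping the watchdog within that interval. A minimal detection sketch, assuming only the standard environment variable:

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"time"
    )

    func main() {
    	// systemd sets WATCHDOG_USEC only when WatchdogSec= is configured
    	// on the unit; its absence is what the kubelet reports above.
    	usec, err := strconv.ParseInt(os.Getenv("WATCHDOG_USEC"), 10, 64)
    	if err != nil || usec <= 0 {
    		fmt.Println("systemd watchdog is not enabled")
    		return
    	}
    	fmt.Println("watchdog interval:", time.Duration(usec)*time.Microsecond)
    }
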
Jul 15 23:58:41.856106 kubelet[2702]: I0715 23:58:41.856097 2702 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 15 23:58:41.856610 kubelet[2702]: E0715 23:58:41.856171 2702 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:58:41.886448 kubelet[2702]: I0715 23:58:41.886410 2702 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 23:58:41.886448 kubelet[2702]: I0715 23:58:41.886432 2702 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 23:58:41.886448 kubelet[2702]: I0715 23:58:41.886455 2702 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:58:41.886693 kubelet[2702]: I0715 23:58:41.886654 2702 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 15 23:58:41.886693 kubelet[2702]: I0715 23:58:41.886667 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 15 23:58:41.886693 kubelet[2702]: I0715 23:58:41.886690 2702 policy_none.go:49] "None policy: Start"
Jul 15 23:58:41.886785 kubelet[2702]: I0715 23:58:41.886701 2702 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 23:58:41.886785 kubelet[2702]: I0715 23:58:41.886713 2702 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:58:41.886857 kubelet[2702]: I0715 23:58:41.886838 2702 state_mem.go:75] "Updated machine memory state"
Jul 15 23:58:41.892612 kubelet[2702]: I0715 23:58:41.892549 2702 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 23:58:41.893117 kubelet[2702]: I0715 23:58:41.892870 2702 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:58:41.893117 kubelet[2702]: I0715 23:58:41.892886 2702 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:58:41.893181 kubelet[2702]: I0715 23:58:41.893153 2702 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:58:41.894802 kubelet[2702]: E0715 23:58:41.894772 2702 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 15 23:58:41.957017 kubelet[2702]: I0715 23:58:41.956968 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 15 23:58:41.957404 kubelet[2702]: I0715 23:58:41.957198 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 15 23:58:41.957452 kubelet[2702]: I0715 23:58:41.957403 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:58:42.000408 kubelet[2702]: I0715 23:58:41.999871 2702 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 23:58:42.010142 kubelet[2702]: I0715 23:58:42.010099 2702 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 15 23:58:42.010293 kubelet[2702]: I0715 23:58:42.010194 2702 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 15 23:58:42.137956 kubelet[2702]: I0715 23:58:42.137876 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bdc1916553ffd86f3c2dd546690f3e64-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bdc1916553ffd86f3c2dd546690f3e64\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:58:42.137956 kubelet[2702]: I0715 23:58:42.137938 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:58:42.137956 kubelet[2702]: I0715 23:58:42.137968 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:58:42.138174 kubelet[2702]: I0715 23:58:42.137988 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:58:42.138174 kubelet[2702]: I0715 23:58:42.138012 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:58:42.138174 kubelet[2702]: I0715 23:58:42.138034 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bdc1916553ffd86f3c2dd546690f3e64-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bdc1916553ffd86f3c2dd546690f3e64\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:58:42.138174 kubelet[2702]: I0715 23:58:42.138052 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bdc1916553ffd86f3c2dd546690f3e64-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bdc1916553ffd86f3c2dd546690f3e64\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:58:42.138174 kubelet[2702]: I0715 23:58:42.138074 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:58:42.138415 kubelet[2702]: I0715 23:58:42.138095 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost"
Jul 15 23:58:42.266066 kubelet[2702]: E0715 23:58:42.265908 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:42.267028 kubelet[2702]: E0715 23:58:42.266685 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:42.267028 kubelet[2702]: E0715 23:58:42.266932 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:42.827516 kubelet[2702]: I0715 23:58:42.827434 2702 apiserver.go:52] "Watching apiserver"
Jul 15 23:58:42.837852 kubelet[2702]: I0715 23:58:42.837790 2702 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 15 23:58:42.873413 kubelet[2702]: E0715 23:58:42.873316 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:42.873549 kubelet[2702]: E0715 23:58:42.873502 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:42.873549 kubelet[2702]: E0715 23:58:42.873524 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:43.835881 kubelet[2702]: I0715 23:58:43.835816 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.835797857 podStartE2EDuration="2.835797857s" podCreationTimestamp="2025-07-15 23:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:58:43.202693291 +0000 UTC m=+1.451146752" watchObservedRunningTime="2025-07-15 23:58:43.835797857 +0000 UTC m=+2.084251318"
Jul 15 23:58:43.874029 kubelet[2702]: E0715 23:58:43.873987 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:43.874176 kubelet[2702]: E0715 23:58:43.874079 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:43.882419 kubelet[2702]: I0715 23:58:43.882305 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.882282492 podStartE2EDuration="2.882282492s" podCreationTimestamp="2025-07-15 23:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:58:43.835979342 +0000 UTC m=+2.084432803" watchObservedRunningTime="2025-07-15 23:58:43.882282492 +0000 UTC m=+2.130735953"
Jul 15 23:58:43.882688 kubelet[2702]: I0715 23:58:43.882516 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.882508342 podStartE2EDuration="2.882508342s" podCreationTimestamp="2025-07-15 23:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:58:43.881107203 +0000 UTC m=+2.129560674" watchObservedRunningTime="2025-07-15 23:58:43.882508342 +0000 UTC m=+2.130961803"
Jul 15 23:58:46.013700 kubelet[2702]: I0715 23:58:46.013655 2702 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 15 23:58:46.014220 containerd[1565]: time="2025-07-15T23:58:46.014052989Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 15 23:58:46.014484 kubelet[2702]: I0715 23:58:46.014249 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 15 23:58:46.519374 update_engine[1555]: I20250715 23:58:46.519268 1555 update_attempter.cc:509] Updating boot flags...
Jul 15 23:58:46.974988 kubelet[2702]: E0715 23:58:46.974920 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:47.093368 systemd[1]: Created slice kubepods-besteffort-podcb5572cf_a9d5_4ed3_9da4_eed01ce46060.slice - libcontainer container kubepods-besteffort-podcb5572cf_a9d5_4ed3_9da4_eed01ce46060.slice.
Jul 15 23:58:47.116520 systemd[1]: Created slice kubepods-besteffort-pod145824d8_c26b_4edc_a9a6_8f2255eaab8d.slice - libcontainer container kubepods-besteffort-pod145824d8_c26b_4edc_a9a6_8f2255eaab8d.slice.
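Note: the recurring dns.go:153 errors come from the kubelet trimming the pod resolv.conf: standard resolvers only consult the first three nameserver entries, so the kubelet enforces that limit and logs the applied line (here 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive; anything beyond the third entry on the host is dropped). A rough sketch of the trimming, assuming resolv.conf-style input; the constant 3 matches the limit the kubelet enforces:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    const maxNameservers = 3 // per-pod resolver limit the kubelet enforces

    // appliedNameservers mimics the behaviour behind "Nameserver limits
    // exceeded": keep the first three nameserver entries, drop the rest.
    func appliedNameservers(resolvConf string) (kept, dropped []string) {
    	sc := bufio.NewScanner(strings.NewReader(resolvConf))
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			if len(kept) < maxNameservers {
    				kept = append(kept, fields[1])
    			} else {
    				dropped = append(dropped, fields[1])
    			}
    		}
    	}
    	return kept, dropped
    }

    func main() {
    	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    	kept, dropped := appliedNameservers(conf)
    	fmt.Println("applied:", strings.Join(kept, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8, as logged
    	fmt.Println("omitted:", dropped)
    }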
Jul 15 23:58:47.169858 kubelet[2702]: I0715 23:58:47.169763 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfwf9\" (UniqueName: \"kubernetes.io/projected/145824d8-c26b-4edc-a9a6-8f2255eaab8d-kube-api-access-pfwf9\") pod \"tigera-operator-747864d56d-q9gwb\" (UID: \"145824d8-c26b-4edc-a9a6-8f2255eaab8d\") " pod="tigera-operator/tigera-operator-747864d56d-q9gwb"
Jul 15 23:58:47.169858 kubelet[2702]: I0715 23:58:47.169848 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkqck\" (UniqueName: \"kubernetes.io/projected/cb5572cf-a9d5-4ed3-9da4-eed01ce46060-kube-api-access-gkqck\") pod \"kube-proxy-t9pc7\" (UID: \"cb5572cf-a9d5-4ed3-9da4-eed01ce46060\") " pod="kube-system/kube-proxy-t9pc7"
Jul 15 23:58:47.169858 kubelet[2702]: I0715 23:58:47.169877 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb5572cf-a9d5-4ed3-9da4-eed01ce46060-kube-proxy\") pod \"kube-proxy-t9pc7\" (UID: \"cb5572cf-a9d5-4ed3-9da4-eed01ce46060\") " pod="kube-system/kube-proxy-t9pc7"
Jul 15 23:58:47.170481 kubelet[2702]: I0715 23:58:47.169893 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb5572cf-a9d5-4ed3-9da4-eed01ce46060-xtables-lock\") pod \"kube-proxy-t9pc7\" (UID: \"cb5572cf-a9d5-4ed3-9da4-eed01ce46060\") " pod="kube-system/kube-proxy-t9pc7"
Jul 15 23:58:47.170481 kubelet[2702]: I0715 23:58:47.169908 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/145824d8-c26b-4edc-a9a6-8f2255eaab8d-var-lib-calico\") pod \"tigera-operator-747864d56d-q9gwb\" (UID: \"145824d8-c26b-4edc-a9a6-8f2255eaab8d\") " pod="tigera-operator/tigera-operator-747864d56d-q9gwb"
Jul 15 23:58:47.170481 kubelet[2702]: I0715 23:58:47.169925 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb5572cf-a9d5-4ed3-9da4-eed01ce46060-lib-modules\") pod \"kube-proxy-t9pc7\" (UID: \"cb5572cf-a9d5-4ed3-9da4-eed01ce46060\") " pod="kube-system/kube-proxy-t9pc7"
Jul 15 23:58:47.404122 kubelet[2702]: E0715 23:58:47.403929 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:47.404956 containerd[1565]: time="2025-07-15T23:58:47.404780489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9pc7,Uid:cb5572cf-a9d5-4ed3-9da4-eed01ce46060,Namespace:kube-system,Attempt:0,}"
Jul 15 23:58:47.420100 containerd[1565]: time="2025-07-15T23:58:47.420044024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-q9gwb,Uid:145824d8-c26b-4edc-a9a6-8f2255eaab8d,Namespace:tigera-operator,Attempt:0,}"
Jul 15 23:58:47.426579 containerd[1565]: time="2025-07-15T23:58:47.426494720Z" level=info msg="connecting to shim 6d710455531b6a079eda65bd09ec479a08d03f2d4114c0bd14804d1edbfc5186" address="unix:///run/containerd/s/a7f6e668cb2560f4accaedaf84697faf63f67a90829d8c29fa7dac7e1430bbf2" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:58:47.451829 containerd[1565]: time="2025-07-15T23:58:47.451762265Z" level=info msg="connecting to shim 6b53f4356cf920dc5a3012cadbfef5e8a288604ff667519ebd7d7150f5059f67" address="unix:///run/containerd/s/9ee47fef6b9db7cf1e789ffb8a482eabfed219d39cf40952d345d5fce6e20dcb" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:58:47.456664 systemd[1]: Started cri-containerd-6d710455531b6a079eda65bd09ec479a08d03f2d4114c0bd14804d1edbfc5186.scope - libcontainer container 6d710455531b6a079eda65bd09ec479a08d03f2d4114c0bd14804d1edbfc5186.
Jul 15 23:58:47.478544 systemd[1]: Started cri-containerd-6b53f4356cf920dc5a3012cadbfef5e8a288604ff667519ebd7d7150f5059f67.scope - libcontainer container 6b53f4356cf920dc5a3012cadbfef5e8a288604ff667519ebd7d7150f5059f67.
Jul 15 23:58:47.487731 containerd[1565]: time="2025-07-15T23:58:47.487688193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9pc7,Uid:cb5572cf-a9d5-4ed3-9da4-eed01ce46060,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d710455531b6a079eda65bd09ec479a08d03f2d4114c0bd14804d1edbfc5186\""
Jul 15 23:58:47.488406 kubelet[2702]: E0715 23:58:47.488363 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:47.490715 containerd[1565]: time="2025-07-15T23:58:47.490687597Z" level=info msg="CreateContainer within sandbox \"6d710455531b6a079eda65bd09ec479a08d03f2d4114c0bd14804d1edbfc5186\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 23:58:47.504416 containerd[1565]: time="2025-07-15T23:58:47.504057456Z" level=info msg="Container 53035db9cb540eba03251541d01e9a41dcdf75ef9276fc69ed612a4dcce0a034: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:58:47.513029 containerd[1565]: time="2025-07-15T23:58:47.512981087Z" level=info msg="CreateContainer within sandbox \"6d710455531b6a079eda65bd09ec479a08d03f2d4114c0bd14804d1edbfc5186\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"53035db9cb540eba03251541d01e9a41dcdf75ef9276fc69ed612a4dcce0a034\""
Jul 15 23:58:47.514216 containerd[1565]: time="2025-07-15T23:58:47.513576777Z" level=info msg="StartContainer for \"53035db9cb540eba03251541d01e9a41dcdf75ef9276fc69ed612a4dcce0a034\""
Jul 15 23:58:47.515527 containerd[1565]: time="2025-07-15T23:58:47.515463279Z" level=info msg="connecting to shim 53035db9cb540eba03251541d01e9a41dcdf75ef9276fc69ed612a4dcce0a034" address="unix:///run/containerd/s/a7f6e668cb2560f4accaedaf84697faf63f67a90829d8c29fa7dac7e1430bbf2" protocol=ttrpc version=3
Jul 15 23:58:47.534145 containerd[1565]: time="2025-07-15T23:58:47.534076322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-q9gwb,Uid:145824d8-c26b-4edc-a9a6-8f2255eaab8d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6b53f4356cf920dc5a3012cadbfef5e8a288604ff667519ebd7d7150f5059f67\""
Jul 15 23:58:47.537126 containerd[1565]: time="2025-07-15T23:58:47.536910041Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 15 23:58:47.539546 systemd[1]: Started cri-containerd-53035db9cb540eba03251541d01e9a41dcdf75ef9276fc69ed612a4dcce0a034.scope - libcontainer container 53035db9cb540eba03251541d01e9a41dcdf75ef9276fc69ed612a4dcce0a034.
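Note: the RunPodSandbox / CreateContainer / StartContainer sequence above is the kubelet driving containerd through the CRI gRPC API over a unix socket; the PodSandboxMetadata printed in the log is literally the request metadata. A compressed sketch of the same call from a standalone client, assuming containerd's default CRI endpoint and eliding all real pod configuration:

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial containerd's CRI endpoint (default socket path is an assumption).
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	// Mirrors the PodSandboxMetadata shown in the RunPodSandbox log line.
    	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-proxy-t9pc7",
    				Uid:       "cb5572cf-a9d5-4ed3-9da4-eed01ce46060",
    				Namespace: "kube-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	// containerd answers with the 64-hex sandbox id seen in the log.
    	fmt.Println("sandbox id:", resp.PodSandboxId)
    }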
Jul 15 23:58:47.591435 containerd[1565]: time="2025-07-15T23:58:47.591357389Z" level=info msg="StartContainer for \"53035db9cb540eba03251541d01e9a41dcdf75ef9276fc69ed612a4dcce0a034\" returns successfully"
Jul 15 23:58:47.885131 kubelet[2702]: E0715 23:58:47.885074 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:47.885569 kubelet[2702]: E0715 23:58:47.885550 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:47.898576 kubelet[2702]: I0715 23:58:47.898280 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t9pc7" podStartSLOduration=0.898260333 podStartE2EDuration="898.260333ms" podCreationTimestamp="2025-07-15 23:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:58:47.897957939 +0000 UTC m=+6.146411400" watchObservedRunningTime="2025-07-15 23:58:47.898260333 +0000 UTC m=+6.146713794"
Jul 15 23:58:48.886744 kubelet[2702]: E0715 23:58:48.886688 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:48.890780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386207713.mount: Deactivated successfully.
Jul 15 23:58:48.917139 kubelet[2702]: E0715 23:58:48.917067 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:49.888742 kubelet[2702]: E0715 23:58:49.888702 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:49.905092 containerd[1565]: time="2025-07-15T23:58:49.905001396Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:58:49.905962 containerd[1565]: time="2025-07-15T23:58:49.905935676Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 15 23:58:49.907096 containerd[1565]: time="2025-07-15T23:58:49.907071710Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:58:49.909130 containerd[1565]: time="2025-07-15T23:58:49.909070838Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:58:49.909582 containerd[1565]: time="2025-07-15T23:58:49.909556158Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.37261062s"
Jul 15 23:58:49.909582 containerd[1565]: time="2025-07-15T23:58:49.909583020Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 15 23:58:49.914502 containerd[1565]: time="2025-07-15T23:58:49.914449644Z" level=info msg="CreateContainer within sandbox \"6b53f4356cf920dc5a3012cadbfef5e8a288604ff667519ebd7d7150f5059f67\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 15 23:58:49.922982 containerd[1565]: time="2025-07-15T23:58:49.922930328Z" level=info msg="Container a6c0d71e92ab4934a4626792f46b8a8fd0b8e685078abc758d38e8fdc94fe2a8: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:58:49.929176 containerd[1565]: time="2025-07-15T23:58:49.929126543Z" level=info msg="CreateContainer within sandbox \"6b53f4356cf920dc5a3012cadbfef5e8a288604ff667519ebd7d7150f5059f67\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a6c0d71e92ab4934a4626792f46b8a8fd0b8e685078abc758d38e8fdc94fe2a8\""
Jul 15 23:58:49.929690 containerd[1565]: time="2025-07-15T23:58:49.929663811Z" level=info msg="StartContainer for \"a6c0d71e92ab4934a4626792f46b8a8fd0b8e685078abc758d38e8fdc94fe2a8\""
Jul 15 23:58:49.930586 containerd[1565]: time="2025-07-15T23:58:49.930561593Z" level=info msg="connecting to shim a6c0d71e92ab4934a4626792f46b8a8fd0b8e685078abc758d38e8fdc94fe2a8" address="unix:///run/containerd/s/9ee47fef6b9db7cf1e789ffb8a482eabfed219d39cf40952d345d5fce6e20dcb" protocol=ttrpc version=3
Jul 15 23:58:49.990564 systemd[1]: Started cri-containerd-a6c0d71e92ab4934a4626792f46b8a8fd0b8e685078abc758d38e8fdc94fe2a8.scope - libcontainer container a6c0d71e92ab4934a4626792f46b8a8fd0b8e685078abc758d38e8fdc94fe2a8.
Jul 15 23:58:50.024927 containerd[1565]: time="2025-07-15T23:58:50.024881560Z" level=info msg="StartContainer for \"a6c0d71e92ab4934a4626792f46b8a8fd0b8e685078abc758d38e8fdc94fe2a8\" returns successfully"
Jul 15 23:58:50.891834 kubelet[2702]: E0715 23:58:50.891790 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:51.733936 kubelet[2702]: E0715 23:58:51.733863 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:51.767490 kubelet[2702]: I0715 23:58:51.766692 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-q9gwb" podStartSLOduration=2.391904968 podStartE2EDuration="4.766671888s" podCreationTimestamp="2025-07-15 23:58:47 +0000 UTC" firstStartedPulling="2025-07-15 23:58:47.536139669 +0000 UTC m=+5.784593130" lastFinishedPulling="2025-07-15 23:58:49.910906589 +0000 UTC m=+8.159360050" observedRunningTime="2025-07-15 23:58:50.925513429 +0000 UTC m=+9.173966890" watchObservedRunningTime="2025-07-15 23:58:51.766671888 +0000 UTC m=+10.015125349"
Jul 15 23:58:51.895276 kubelet[2702]: E0715 23:58:51.894933 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:55.637609 sudo[1768]: pam_unix(sudo:session): session closed for user root
Jul 15 23:58:55.639322 sshd[1767]: Connection closed by 10.0.0.1 port 35530
Jul 15 23:58:55.639837 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Jul 15 23:58:55.645986 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:35530.service: Deactivated successfully.
Jul 15 23:58:55.650089 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 23:58:55.650697 systemd[1]: session-7.scope: Consumed 5.980s CPU time, 230.8M memory peak.
Jul 15 23:58:55.654990 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit.
Jul 15 23:58:55.657477 systemd-logind[1548]: Removed session 7.
Jul 15 23:58:58.938448 systemd[1]: Created slice kubepods-besteffort-poddb675a96_f5f3_41ea_9eee_1e65b06866b8.slice - libcontainer container kubepods-besteffort-poddb675a96_f5f3_41ea_9eee_1e65b06866b8.slice.
Jul 15 23:58:58.943597 kubelet[2702]: I0715 23:58:58.943534 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db675a96-f5f3-41ea-9eee-1e65b06866b8-tigera-ca-bundle\") pod \"calico-typha-8545bdb795-d5ks7\" (UID: \"db675a96-f5f3-41ea-9eee-1e65b06866b8\") " pod="calico-system/calico-typha-8545bdb795-d5ks7"
Jul 15 23:58:58.943597 kubelet[2702]: I0715 23:58:58.943586 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/db675a96-f5f3-41ea-9eee-1e65b06866b8-typha-certs\") pod \"calico-typha-8545bdb795-d5ks7\" (UID: \"db675a96-f5f3-41ea-9eee-1e65b06866b8\") " pod="calico-system/calico-typha-8545bdb795-d5ks7"
Jul 15 23:58:58.944095 kubelet[2702]: I0715 23:58:58.943613 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpwhw\" (UniqueName: \"kubernetes.io/projected/db675a96-f5f3-41ea-9eee-1e65b06866b8-kube-api-access-kpwhw\") pod \"calico-typha-8545bdb795-d5ks7\" (UID: \"db675a96-f5f3-41ea-9eee-1e65b06866b8\") " pod="calico-system/calico-typha-8545bdb795-d5ks7"
Jul 15 23:58:59.074056 systemd[1]: Created slice kubepods-besteffort-pod20581c9a_a819_4690_9c03_4674ff86d556.slice - libcontainer container kubepods-besteffort-pod20581c9a_a819_4690_9c03_4674ff86d556.slice.
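Note: the "Created slice kubepods-besteffort-pod…" lines show the systemd cgroup driver's naming scheme: QoS class plus the pod UID with dashes escaped to underscores, because "-" is systemd's hierarchy separator inside unit names. A small sketch reproducing the pattern visible in the log (a simplified illustration, not the kubelet's actual escaping code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName reproduces the pattern in the systemd log lines:
    // kubepods-<qos>-pod<uid with '-' escaped to '_'>.slice. Dashes must be
    // escaped because systemd uses them to encode the slice hierarchy.
    func podSliceName(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice",
    		qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	// Matches "Created slice kubepods-besteffort-poddb675a96_f5f3_41ea_9eee_1e65b06866b8.slice"
    	fmt.Println(podSliceName("besteffort", "db675a96-f5f3-41ea-9eee-1e65b06866b8"))
    }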
Jul 15 23:58:59.145076 kubelet[2702]: I0715 23:58:59.144986 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20581c9a-a819-4690-9c03-4674ff86d556-lib-modules\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145076 kubelet[2702]: I0715 23:58:59.145044 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8vcf\" (UniqueName: \"kubernetes.io/projected/20581c9a-a819-4690-9c03-4674ff86d556-kube-api-access-n8vcf\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145323 kubelet[2702]: I0715 23:58:59.145145 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/20581c9a-a819-4690-9c03-4674ff86d556-cni-bin-dir\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145323 kubelet[2702]: I0715 23:58:59.145185 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/20581c9a-a819-4690-9c03-4674ff86d556-policysync\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145323 kubelet[2702]: I0715 23:58:59.145203 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/20581c9a-a819-4690-9c03-4674ff86d556-var-run-calico\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145323 kubelet[2702]: I0715 23:58:59.145218 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20581c9a-a819-4690-9c03-4674ff86d556-xtables-lock\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145323 kubelet[2702]: I0715 23:58:59.145236 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/20581c9a-a819-4690-9c03-4674ff86d556-cni-net-dir\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145573 kubelet[2702]: I0715 23:58:59.145254 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/20581c9a-a819-4690-9c03-4674ff86d556-cni-log-dir\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145573 kubelet[2702]: I0715 23:58:59.145273 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20581c9a-a819-4690-9c03-4674ff86d556-tigera-ca-bundle\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145573 kubelet[2702]: I0715 23:58:59.145293 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/20581c9a-a819-4690-9c03-4674ff86d556-node-certs\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145573 kubelet[2702]: I0715 23:58:59.145312 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/20581c9a-a819-4690-9c03-4674ff86d556-flexvol-driver-host\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.145573 kubelet[2702]: I0715 23:58:59.145330 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/20581c9a-a819-4690-9c03-4674ff86d556-var-lib-calico\") pod \"calico-node-4jckp\" (UID: \"20581c9a-a819-4690-9c03-4674ff86d556\") " pod="calico-system/calico-node-4jckp"
Jul 15 23:58:59.168266 kubelet[2702]: E0715 23:58:59.167958 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6"
Jul 15 23:58:59.243988 kubelet[2702]: E0715 23:58:59.243824 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:58:59.244489 containerd[1565]: time="2025-07-15T23:58:59.244436592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8545bdb795-d5ks7,Uid:db675a96-f5f3-41ea-9eee-1e65b06866b8,Namespace:calico-system,Attempt:0,}"
Jul 15 23:58:59.246260 kubelet[2702]: I0715 23:58:59.246101 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a8939d7-2475-4d58-9e48-7fc1e896bab6-socket-dir\") pod \"csi-node-driver-krpv9\" (UID: \"5a8939d7-2475-4d58-9e48-7fc1e896bab6\") " pod="calico-system/csi-node-driver-krpv9"
Jul 15 23:58:59.246260 kubelet[2702]: I0715 23:58:59.246142 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5a8939d7-2475-4d58-9e48-7fc1e896bab6-varrun\") pod \"csi-node-driver-krpv9\" (UID: \"5a8939d7-2475-4d58-9e48-7fc1e896bab6\") " pod="calico-system/csi-node-driver-krpv9"
Jul 15 23:58:59.246260 kubelet[2702]: I0715 23:58:59.246170 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svrv9\" (UniqueName: \"kubernetes.io/projected/5a8939d7-2475-4d58-9e48-7fc1e896bab6-kube-api-access-svrv9\") pod \"csi-node-driver-krpv9\" (UID: \"5a8939d7-2475-4d58-9e48-7fc1e896bab6\") " pod="calico-system/csi-node-driver-krpv9"
Jul 15 23:58:59.246423 kubelet[2702]: I0715 23:58:59.246265 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a8939d7-2475-4d58-9e48-7fc1e896bab6-kubelet-dir\") pod \"csi-node-driver-krpv9\" (UID: \"5a8939d7-2475-4d58-9e48-7fc1e896bab6\") " pod="calico-system/csi-node-driver-krpv9"
Jul 15 23:58:59.246423 kubelet[2702]: I0715 23:58:59.246347 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a8939d7-2475-4d58-9e48-7fc1e896bab6-registration-dir\") pod \"csi-node-driver-krpv9\" (UID: \"5a8939d7-2475-4d58-9e48-7fc1e896bab6\") " pod="calico-system/csi-node-driver-krpv9"
Jul 15 23:58:59.467724 kubelet[2702]: E0715 23:58:59.467688 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:59.467724 kubelet[2702]: W0715 23:58:59.467711 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:59.467866 kubelet[2702]: E0715 23:58:59.467752 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:59.677200 containerd[1565]: time="2025-07-15T23:58:59.677076927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4jckp,Uid:20581c9a-a819-4690-9c03-4674ff86d556,Namespace:calico-system,Attempt:0,}"
Jul 15 23:58:59.964535 containerd[1565]: time="2025-07-15T23:58:59.961923568Z" level=info msg="connecting to shim d954b44d6d6963037cc5331ff42d23b809d2a022db96da19f25a227da212e14a" address="unix:///run/containerd/s/8f3ed0d3d5c3b9f1ade9f086113993a185cc45105d23694f8055eef6696278a0" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:58:59.969126 containerd[1565]: time="2025-07-15T23:58:59.969069663Z" level=info msg="connecting to shim 9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53" address="unix:///run/containerd/s/7ff341f606fb84e69422ee1c2b8af6dfe34065d660e44612a901cde1be9c9e29" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:59:00.001570 systemd[1]: Started cri-containerd-d954b44d6d6963037cc5331ff42d23b809d2a022db96da19f25a227da212e14a.scope - libcontainer container d954b44d6d6963037cc5331ff42d23b809d2a022db96da19f25a227da212e14a.
Jul 15 23:59:00.006017 systemd[1]: Started cri-containerd-9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53.scope - libcontainer container 9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53.
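Note: the driver-call errors above are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers. The nodeagent~uds binary is absent, so the `init` call produces empty output and the JSON unmarshal fails with "unexpected end of JSON input". A FlexVolume driver is conventionally expected to answer `init` with a small JSON status document; a sketch of that shape, assuming the usual fields:

    package main

    import (
    	"encoding/json"
    	"os"
    )

    // initResponse is the JSON a FlexVolume driver conventionally prints for
    // the "init" call. An empty reply is exactly what produces "unexpected
    // end of JSON input" in the kubelet log above.
    type initResponse struct {
    	Status       string `json:"status"` // "Success" or "Failure"
    	Capabilities struct {
    		Attach bool `json:"attach"`
    	} `json:"capabilities"`
    }

    func main() {
    	var resp initResponse
    	resp.Status = "Success"
    	resp.Capabilities.Attach = false // mount-only driver, no attach/detach
    	json.NewEncoder(os.Stdout).Encode(resp) // {"status":"Success","capabilities":{"attach":false}}
    }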
Jul 15 23:59:00.228726 containerd[1565]: time="2025-07-15T23:59:00.228571650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4jckp,Uid:20581c9a-a819-4690-9c03-4674ff86d556,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53\""
Jul 15 23:59:00.230295 containerd[1565]: time="2025-07-15T23:59:00.230158702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 15 23:59:00.328150 containerd[1565]: time="2025-07-15T23:59:00.328094889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8545bdb795-d5ks7,Uid:db675a96-f5f3-41ea-9eee-1e65b06866b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"d954b44d6d6963037cc5331ff42d23b809d2a022db96da19f25a227da212e14a\""
Jul 15 23:59:00.328836 kubelet[2702]: E0715 23:59:00.328780 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:00.856722 kubelet[2702]: E0715 23:59:00.856633 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6"
Jul 15 23:59:01.934873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573416562.mount: Deactivated successfully.
Jul 15 23:59:02.008808 containerd[1565]: time="2025-07-15T23:59:02.008726438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:02.009620 containerd[1565]: time="2025-07-15T23:59:02.009568795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797"
Jul 15 23:59:02.010981 containerd[1565]: time="2025-07-15T23:59:02.010945778Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:02.013014 containerd[1565]: time="2025-07-15T23:59:02.012978757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:02.013666 containerd[1565]: time="2025-07-15T23:59:02.013604266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.783409165s"
Jul 15 23:59:02.013716 containerd[1565]: time="2025-07-15T23:59:02.013662074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 15 23:59:02.014592 containerd[1565]: time="2025-07-15T23:59:02.014555146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 15 23:59:02.015943 containerd[1565]: time="2025-07-15T23:59:02.015893688Z" level=info msg="CreateContainer within sandbox \"9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 15 23:59:02.027029 containerd[1565]: time="2025-07-15T23:59:02.026985941Z" level=info msg="Container df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:59:02.036859 containerd[1565]: time="2025-07-15T23:59:02.036800379Z" level=info msg="CreateContainer within sandbox \"9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5\""
Jul 15 23:59:02.037484 containerd[1565]: time="2025-07-15T23:59:02.037272798Z" level=info msg="StartContainer for \"df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5\""
Jul 15 23:59:02.039150 containerd[1565]: time="2025-07-15T23:59:02.039046830Z" level=info msg="connecting to shim df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5" address="unix:///run/containerd/s/7ff341f606fb84e69422ee1c2b8af6dfe34065d660e44612a901cde1be9c9e29" protocol=ttrpc version=3
Jul 15 23:59:02.064646 systemd[1]: Started cri-containerd-df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5.scope - libcontainer container df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5.
Jul 15 23:59:02.131284 systemd[1]: cri-containerd-df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5.scope: Deactivated successfully.
Jul 15 23:59:02.133264 containerd[1565]: time="2025-07-15T23:59:02.133216581Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5\" id:\"df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5\" pid:3256 exited_at:{seconds:1752623942 nanos:132758058}"
Jul 15 23:59:02.235442 containerd[1565]: time="2025-07-15T23:59:02.235320878Z" level=info msg="received exit event container_id:\"df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5\" id:\"df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5\" pid:3256 exited_at:{seconds:1752623942 nanos:132758058}"
Jul 15 23:59:02.237642 containerd[1565]: time="2025-07-15T23:59:02.237594680Z" level=info msg="StartContainer for \"df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5\" returns successfully"
Jul 15 23:59:02.858425 kubelet[2702]: E0715 23:59:02.857834 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6"
Jul 15 23:59:02.896323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df28260535b38ceb70a42c2bb05aa30832797cd5d5c919b03e6116df3832f8c5-rootfs.mount: Deactivated successfully.
Jul 15 23:59:04.857575 kubelet[2702]: E0715 23:59:04.857459 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6"
Jul 15 23:59:06.656464 containerd[1565]: time="2025-07-15T23:59:06.656364184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:06.658307 containerd[1565]: time="2025-07-15T23:59:06.657790417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33740523"
Jul 15 23:59:06.660547 containerd[1565]: time="2025-07-15T23:59:06.660472273Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:06.663466 containerd[1565]: time="2025-07-15T23:59:06.663352021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:06.664201 containerd[1565]: time="2025-07-15T23:59:06.664139844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.64954918s"
Jul 15 23:59:06.664201 containerd[1565]: time="2025-07-15T23:59:06.664195759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 15 23:59:06.665297 containerd[1565]: time="2025-07-15T23:59:06.665216209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 15 23:59:06.678368 containerd[1565]: time="2025-07-15T23:59:06.678253109Z" level=info msg="CreateContainer within sandbox \"d954b44d6d6963037cc5331ff42d23b809d2a022db96da19f25a227da212e14a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 15 23:59:06.692619 containerd[1565]: time="2025-07-15T23:59:06.692533849Z" level=info msg="Container 49fbdf216c7c16ec11d80da08dd196db94b7347ec1633ea671973987862c692a: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:59:06.704496 containerd[1565]: time="2025-07-15T23:59:06.704423792Z" level=info msg="CreateContainer within sandbox \"d954b44d6d6963037cc5331ff42d23b809d2a022db96da19f25a227da212e14a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"49fbdf216c7c16ec11d80da08dd196db94b7347ec1633ea671973987862c692a\""
Jul 15 23:59:06.705153 containerd[1565]: time="2025-07-15T23:59:06.705098001Z" level=info msg="StartContainer for \"49fbdf216c7c16ec11d80da08dd196db94b7347ec1633ea671973987862c692a\""
Jul 15 23:59:06.706589 containerd[1565]: time="2025-07-15T23:59:06.706545855Z" level=info msg="connecting to shim 49fbdf216c7c16ec11d80da08dd196db94b7347ec1633ea671973987862c692a" address="unix:///run/containerd/s/8f3ed0d3d5c3b9f1ade9f086113993a185cc45105d23694f8055eef6696278a0" protocol=ttrpc version=3
Jul 15 23:59:06.735667 systemd[1]: Started cri-containerd-49fbdf216c7c16ec11d80da08dd196db94b7347ec1633ea671973987862c692a.scope - libcontainer container 49fbdf216c7c16ec11d80da08dd196db94b7347ec1633ea671973987862c692a.
Jul 15 23:59:06.793013 containerd[1565]: time="2025-07-15T23:59:06.792961864Z" level=info msg="StartContainer for \"49fbdf216c7c16ec11d80da08dd196db94b7347ec1633ea671973987862c692a\" returns successfully"
Jul 15 23:59:06.857797 kubelet[2702]: E0715 23:59:06.857633 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6"
Jul 15 23:59:06.934484 kubelet[2702]: E0715 23:59:06.933838 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:07.934610 kubelet[2702]: I0715 23:59:07.934560 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:59:07.935149 kubelet[2702]: E0715 23:59:07.934925 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:08.857424 kubelet[2702]: E0715 23:59:08.857342 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6"
Jul 15 23:59:10.856533 kubelet[2702]: E0715 23:59:10.856459 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6"
Jul 15 23:59:12.715352 containerd[1565]: time="2025-07-15T23:59:12.715285633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:12.716166 containerd[1565]: time="2025-07-15T23:59:12.716112276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 15 23:59:12.717451 containerd[1565]: time="2025-07-15T23:59:12.717400177Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:12.719750 containerd[1565]: time="2025-07-15T23:59:12.719718654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:12.720504 containerd[1565]: time="2025-07-15T23:59:12.720404633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.055144691s"
Jul 15 23:59:12.720504 containerd[1565]: time="2025-07-15T23:59:12.720441363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 15 23:59:12.722959 containerd[1565]: time="2025-07-15T23:59:12.722912085Z" level=info msg="CreateContainer within sandbox \"9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 15 23:59:12.733609 containerd[1565]: time="2025-07-15T23:59:12.733562551Z" level=info msg="Container 70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:59:12.743776 containerd[1565]: time="2025-07-15T23:59:12.743733507Z" level=info msg="CreateContainer within sandbox \"9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa\""
Jul 15 23:59:12.744213 containerd[1565]: time="2025-07-15T23:59:12.744174315Z" level=info msg="StartContainer for \"70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa\""
Jul 15 23:59:12.745504 containerd[1565]: time="2025-07-15T23:59:12.745473116Z" level=info msg="connecting to shim 70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa" address="unix:///run/containerd/s/7ff341f606fb84e69422ee1c2b8af6dfe34065d660e44612a901cde1be9c9e29" protocol=ttrpc version=3
Jul 15 23:59:12.780706 systemd[1]: Started cri-containerd-70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa.scope - libcontainer container 70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa.
Jul 15 23:59:12.835351 containerd[1565]: time="2025-07-15T23:59:12.835281598Z" level=info msg="StartContainer for \"70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa\" returns successfully"
Jul 15 23:59:12.860823 kubelet[2702]: E0715 23:59:12.857356 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6"
Jul 15 23:59:12.973123 kubelet[2702]: I0715 23:59:12.972940 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8545bdb795-d5ks7" podStartSLOduration=8.637050177999999 podStartE2EDuration="14.972904922s" podCreationTimestamp="2025-07-15 23:58:58 +0000 UTC" firstStartedPulling="2025-07-15 23:59:00.32917262 +0000 UTC m=+18.577626081" lastFinishedPulling="2025-07-15 23:59:06.665027364 +0000 UTC m=+24.913480825" observedRunningTime="2025-07-15 23:59:06.95468057 +0000 UTC m=+25.203134052" watchObservedRunningTime="2025-07-15 23:59:12.972904922 +0000 UTC m=+31.221358383"
Jul 15 23:59:14.856853 kubelet[2702]: E0715 23:59:14.856762 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6"
Jul 15 23:59:14.967353 containerd[1565]: time="2025-07-15T23:59:14.967285565Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:59:14.973229 systemd[1]: cri-containerd-70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa.scope: Deactivated successfully.
Jul 15 23:59:14.973705 containerd[1565]: time="2025-07-15T23:59:14.973204786Z" level=info msg="received exit event container_id:\"70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa\" id:\"70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa\" pid:3360 exited_at:{seconds:1752623954 nanos:972894594}"
Jul 15 23:59:14.973705 containerd[1565]: time="2025-07-15T23:59:14.973271872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa\" id:\"70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa\" pid:3360 exited_at:{seconds:1752623954 nanos:972894594}"
Jul 15 23:59:14.974245 systemd[1]: cri-containerd-70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa.scope: Consumed 707ms CPU time, 175.1M memory peak, 3M read from disk, 171.2M written to disk.
Jul 15 23:59:15.000843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70a98514491c13eb08fc79a0fef601f23d5dfb81e14ac7b192479cc53d44d6fa-rootfs.mount: Deactivated successfully.
Jul 15 23:59:15.002554 kubelet[2702]: I0715 23:59:15.001933 2702 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 15 23:59:15.099700 systemd[1]: Created slice kubepods-burstable-pod7d7f1199_1547_4be5_8aa6_0ec801a111c9.slice - libcontainer container kubepods-burstable-pod7d7f1199_1547_4be5_8aa6_0ec801a111c9.slice.
Jul 15 23:59:15.120826 systemd[1]: Created slice kubepods-burstable-pod7f2b822f_8c23_4f99_836e_74e8efcaaf0b.slice - libcontainer container kubepods-burstable-pod7f2b822f_8c23_4f99_836e_74e8efcaaf0b.slice.
Jul 15 23:59:15.129555 systemd[1]: Created slice kubepods-besteffort-pod60ea8394_7c5f_487c_b841_fc0d13d92798.slice - libcontainer container kubepods-besteffort-pod60ea8394_7c5f_487c_b841_fc0d13d92798.slice.
Jul 15 23:59:15.141170 systemd[1]: Created slice kubepods-besteffort-pod20f53838_cc61_41d5_95ec_cc1e15cdb769.slice - libcontainer container kubepods-besteffort-pod20f53838_cc61_41d5_95ec_cc1e15cdb769.slice.
Jul 15 23:59:15.150085 systemd[1]: Created slice kubepods-besteffort-pod4f0bea4f_a598_42a6_9a83_eca16f1aebc6.slice - libcontainer container kubepods-besteffort-pod4f0bea4f_a598_42a6_9a83_eca16f1aebc6.slice.
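Note: the "failed to reload cni configuration" error fires because containerd watches /etc/cni/net.d for changes, and at this point Calico's install-cni container has written only its kubeconfig; until an actual network config (*.conf/*.conflist) lands, the CNI plugin stays uninitialized and the csi-node-driver pod keeps being skipped. A sketch of that kind of directory watch using fsnotify (an assumption for illustration, not containerd's actual code):

    package main

    import (
    	"fmt"
    	"log"
    	"path/filepath"

    	"github.com/fsnotify/fsnotify"
    )

    // Watch the CNI config directory and re-check for usable network configs
    // on every write; while only calico-kubeconfig exists, the check fails,
    // which is roughly the situation behind the error above.
    func main() {
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()
    	if err := w.Add("/etc/cni/net.d"); err != nil {
    		log.Fatal(err)
    	}
    	for ev := range w.Events {
    		if ev.Op&fsnotify.Write == 0 {
    			continue
    		}
    		confs, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
    		if len(confs) == 0 {
    			fmt.Printf("change to %s, but no network config found yet\n", ev.Name)
    			continue
    		}
    		fmt.Println("loadable CNI configs:", confs)
    	}
    }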
Jul 15 23:59:15.155574 kubelet[2702]: I0715 23:59:15.155168 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f0bea4f-a598-42a6-9a83-eca16f1aebc6-config\") pod \"goldmane-768f4c5c69-m7vc8\" (UID: \"4f0bea4f-a598-42a6-9a83-eca16f1aebc6\") " pod="calico-system/goldmane-768f4c5c69-m7vc8"
Jul 15 23:59:15.155574 kubelet[2702]: I0715 23:59:15.155217 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f0bea4f-a598-42a6-9a83-eca16f1aebc6-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-m7vc8\" (UID: \"4f0bea4f-a598-42a6-9a83-eca16f1aebc6\") " pod="calico-system/goldmane-768f4c5c69-m7vc8"
Jul 15 23:59:15.155574 kubelet[2702]: I0715 23:59:15.155251 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c25d4aa9-c262-4230-8df3-19ee05cb967f-whisker-backend-key-pair\") pod \"whisker-77686895cb-fx2dk\" (UID: \"c25d4aa9-c262-4230-8df3-19ee05cb967f\") " pod="calico-system/whisker-77686895cb-fx2dk"
Jul 15 23:59:15.155574 kubelet[2702]: I0715 23:59:15.155274 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g796v\" (UniqueName: \"kubernetes.io/projected/4f0bea4f-a598-42a6-9a83-eca16f1aebc6-kube-api-access-g796v\") pod \"goldmane-768f4c5c69-m7vc8\" (UID: \"4f0bea4f-a598-42a6-9a83-eca16f1aebc6\") " pod="calico-system/goldmane-768f4c5c69-m7vc8"
Jul 15 23:59:15.155574 kubelet[2702]: I0715 23:59:15.155297 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20f53838-cc61-41d5-95ec-cc1e15cdb769-calico-apiserver-certs\") pod \"calico-apiserver-86f64d6979-2m69b\" (UID: \"20f53838-cc61-41d5-95ec-cc1e15cdb769\") " pod="calico-apiserver/calico-apiserver-86f64d6979-2m69b"
Jul 15 23:59:15.155941 kubelet[2702]: I0715 23:59:15.155316 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jc7q\" (UniqueName: \"kubernetes.io/projected/c25d4aa9-c262-4230-8df3-19ee05cb967f-kube-api-access-9jc7q\") pod \"whisker-77686895cb-fx2dk\" (UID: \"c25d4aa9-c262-4230-8df3-19ee05cb967f\") " pod="calico-system/whisker-77686895cb-fx2dk"
Jul 15 23:59:15.155941 kubelet[2702]: I0715 23:59:15.155335 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4f0bea4f-a598-42a6-9a83-eca16f1aebc6-goldmane-key-pair\") pod \"goldmane-768f4c5c69-m7vc8\" (UID: \"4f0bea4f-a598-42a6-9a83-eca16f1aebc6\") " pod="calico-system/goldmane-768f4c5c69-m7vc8"
Jul 15 23:59:15.155941 kubelet[2702]: I0715 23:59:15.155356 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d7f1199-1547-4be5-8aa6-0ec801a111c9-config-volume\") pod \"coredns-668d6bf9bc-gm476\" (UID: \"7d7f1199-1547-4be5-8aa6-0ec801a111c9\") " pod="kube-system/coredns-668d6bf9bc-gm476"
Jul 15 23:59:15.155941 kubelet[2702]: I0715 23:59:15.155414 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60ea8394-7c5f-487c-b841-fc0d13d92798-tigera-ca-bundle\") pod \"calico-kube-controllers-86d9bbf597-qj6jm\" (UID: \"60ea8394-7c5f-487c-b841-fc0d13d92798\") " pod="calico-system/calico-kube-controllers-86d9bbf597-qj6jm"
Jul 15 23:59:15.155941 kubelet[2702]: I0715 23:59:15.155437 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6jkg\" (UniqueName: \"kubernetes.io/projected/7d7f1199-1547-4be5-8aa6-0ec801a111c9-kube-api-access-s6jkg\") pod \"coredns-668d6bf9bc-gm476\" (UID: \"7d7f1199-1547-4be5-8aa6-0ec801a111c9\") " pod="kube-system/coredns-668d6bf9bc-gm476"
Jul 15 23:59:15.156160 kubelet[2702]: I0715 23:59:15.155462 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c56ld\" (UniqueName: \"kubernetes.io/projected/c5d6104a-41a6-410f-928e-c8788c0d34a0-kube-api-access-c56ld\") pod \"calico-apiserver-86f64d6979-5kwll\" (UID: \"c5d6104a-41a6-410f-928e-c8788c0d34a0\") " pod="calico-apiserver/calico-apiserver-86f64d6979-5kwll"
Jul 15 23:59:15.156160 kubelet[2702]: I0715 23:59:15.155482 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f2b822f-8c23-4f99-836e-74e8efcaaf0b-config-volume\") pod \"coredns-668d6bf9bc-hjxmw\" (UID: \"7f2b822f-8c23-4f99-836e-74e8efcaaf0b\") " pod="kube-system/coredns-668d6bf9bc-hjxmw"
Jul 15 23:59:15.156160 kubelet[2702]: I0715 23:59:15.155503 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5d6104a-41a6-410f-928e-c8788c0d34a0-calico-apiserver-certs\") pod \"calico-apiserver-86f64d6979-5kwll\" (UID: \"c5d6104a-41a6-410f-928e-c8788c0d34a0\") " pod="calico-apiserver/calico-apiserver-86f64d6979-5kwll"
Jul 15 23:59:15.156160 kubelet[2702]: I0715 23:59:15.155529 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xzlk\" (UniqueName: \"kubernetes.io/projected/60ea8394-7c5f-487c-b841-fc0d13d92798-kube-api-access-8xzlk\") pod \"calico-kube-controllers-86d9bbf597-qj6jm\" (UID: \"60ea8394-7c5f-487c-b841-fc0d13d92798\") " pod="calico-system/calico-kube-controllers-86d9bbf597-qj6jm"
Jul 15 23:59:15.156160 kubelet[2702]: I0715 23:59:15.155551 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25d4aa9-c262-4230-8df3-19ee05cb967f-whisker-ca-bundle\") pod \"whisker-77686895cb-fx2dk\" (UID: \"c25d4aa9-c262-4230-8df3-19ee05cb967f\") " pod="calico-system/whisker-77686895cb-fx2dk"
Jul 15 23:59:15.158678 kubelet[2702]: I0715 23:59:15.155572 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rsgv\" (UniqueName: \"kubernetes.io/projected/20f53838-cc61-41d5-95ec-cc1e15cdb769-kube-api-access-8rsgv\") pod \"calico-apiserver-86f64d6979-2m69b\" (UID: \"20f53838-cc61-41d5-95ec-cc1e15cdb769\") " pod="calico-apiserver/calico-apiserver-86f64d6979-2m69b"
Jul 15 23:59:15.158678 kubelet[2702]: I0715 23:59:15.155592 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjhk8\" (UniqueName: \"kubernetes.io/projected/7f2b822f-8c23-4f99-836e-74e8efcaaf0b-kube-api-access-bjhk8\") pod \"coredns-668d6bf9bc-hjxmw\" (UID: \"7f2b822f-8c23-4f99-836e-74e8efcaaf0b\") " pod="kube-system/coredns-668d6bf9bc-hjxmw"
Jul 15 23:59:15.164514 systemd[1]: Created slice kubepods-besteffort-podc5d6104a_41a6_410f_928e_c8788c0d34a0.slice - libcontainer container kubepods-besteffort-podc5d6104a_41a6_410f_928e_c8788c0d34a0.slice.
Jul 15 23:59:15.170843 systemd[1]: Created slice kubepods-besteffort-podc25d4aa9_c262_4230_8df3_19ee05cb967f.slice - libcontainer container kubepods-besteffort-podc25d4aa9_c262_4230_8df3_19ee05cb967f.slice.
Jul 15 23:59:15.414210 kubelet[2702]: E0715 23:59:15.414049 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:15.414740 containerd[1565]: time="2025-07-15T23:59:15.414653146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gm476,Uid:7d7f1199-1547-4be5-8aa6-0ec801a111c9,Namespace:kube-system,Attempt:0,}"
Jul 15 23:59:15.425487 kubelet[2702]: E0715 23:59:15.425377 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:15.425946 containerd[1565]: time="2025-07-15T23:59:15.425892148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hjxmw,Uid:7f2b822f-8c23-4f99-836e-74e8efcaaf0b,Namespace:kube-system,Attempt:0,}"
Jul 15 23:59:15.435957 containerd[1565]: time="2025-07-15T23:59:15.435852459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d9bbf597-qj6jm,Uid:60ea8394-7c5f-487c-b841-fc0d13d92798,Namespace:calico-system,Attempt:0,}"
Jul 15 23:59:15.447478 containerd[1565]: time="2025-07-15T23:59:15.447430519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-2m69b,Uid:20f53838-cc61-41d5-95ec-cc1e15cdb769,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:59:15.460882 containerd[1565]: time="2025-07-15T23:59:15.460825351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7vc8,Uid:4f0bea4f-a598-42a6-9a83-eca16f1aebc6,Namespace:calico-system,Attempt:0,}"
Jul 15 23:59:15.469901 containerd[1565]: time="2025-07-15T23:59:15.469843923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-5kwll,Uid:c5d6104a-41a6-410f-928e-c8788c0d34a0,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:59:15.477251 containerd[1565]: time="2025-07-15T23:59:15.477192558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77686895cb-fx2dk,Uid:c25d4aa9-c262-4230-8df3-19ee05cb967f,Namespace:calico-system,Attempt:0,}"
Jul 15 23:59:15.604422 containerd[1565]: time="2025-07-15T23:59:15.603727587Z" level=error msg="Failed to destroy network for sandbox \"55186337516f0eaccb006bd9347731d88468bb871d1ab2e77aa6de58ef88dd92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:59:15.605603 containerd[1565]: time="2025-07-15T23:59:15.605565038Z" level=error msg="Failed to destroy network for sandbox \"4edcfabb33cf37cdf57fa4d9f6bf910062905ec4ad3c2a5764abe9abfefe7b94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:59:15.613730 containerd[1565]:
time="2025-07-15T23:59:15.613665866Z" level=error msg="Failed to destroy network for sandbox \"70e7a26aa43c8260d231b44ae161da0d42b5128bc453d0cdce05c78df41dfcc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.614870 containerd[1565]: time="2025-07-15T23:59:15.614836735Z" level=error msg="Failed to destroy network for sandbox \"0b95bae7372247f43c499589b68ac6407f396efc92342837c544a3ee7811f8d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.615095 containerd[1565]: time="2025-07-15T23:59:15.615064583Z" level=error msg="Failed to destroy network for sandbox \"42df911b2fbb8f3faf944cf628e9d5bae9a88e9c6dfdf34f1e27ac4585502f0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.616648 containerd[1565]: time="2025-07-15T23:59:15.616615898Z" level=error msg="Failed to destroy network for sandbox \"eaa119243b2e3f47468b70a970761e9c533fe998b097ecd3212abbb479d372cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.617304 containerd[1565]: time="2025-07-15T23:59:15.617268374Z" level=error msg="Failed to destroy network for sandbox \"689ac89bc86b1d8f8fcb3d0ce52c1c77d21c8e11eca314b3fa17747fd7b3418d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.655127 containerd[1565]: time="2025-07-15T23:59:15.655041161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d9bbf597-qj6jm,Uid:60ea8394-7c5f-487c-b841-fc0d13d92798,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4edcfabb33cf37cdf57fa4d9f6bf910062905ec4ad3c2a5764abe9abfefe7b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.655465 containerd[1565]: time="2025-07-15T23:59:15.655099090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7vc8,Uid:4f0bea4f-a598-42a6-9a83-eca16f1aebc6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e7a26aa43c8260d231b44ae161da0d42b5128bc453d0cdce05c78df41dfcc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.655558 containerd[1565]: time="2025-07-15T23:59:15.655040700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hjxmw,Uid:7f2b822f-8c23-4f99-836e-74e8efcaaf0b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55186337516f0eaccb006bd9347731d88468bb871d1ab2e77aa6de58ef88dd92\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.655558 containerd[1565]: time="2025-07-15T23:59:15.655059886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77686895cb-fx2dk,Uid:c25d4aa9-c262-4230-8df3-19ee05cb967f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b95bae7372247f43c499589b68ac6407f396efc92342837c544a3ee7811f8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.656673 containerd[1565]: time="2025-07-15T23:59:15.656615408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-2m69b,Uid:20f53838-cc61-41d5-95ec-cc1e15cdb769,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42df911b2fbb8f3faf944cf628e9d5bae9a88e9c6dfdf34f1e27ac4585502f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.657712 containerd[1565]: time="2025-07-15T23:59:15.657640273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gm476,Uid:7d7f1199-1547-4be5-8aa6-0ec801a111c9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaa119243b2e3f47468b70a970761e9c533fe998b097ecd3212abbb479d372cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.659296 containerd[1565]: time="2025-07-15T23:59:15.659222525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-5kwll,Uid:c5d6104a-41a6-410f-928e-c8788c0d34a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"689ac89bc86b1d8f8fcb3d0ce52c1c77d21c8e11eca314b3fa17747fd7b3418d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.670444 kubelet[2702]: E0715 23:59:15.669992 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"689ac89bc86b1d8f8fcb3d0ce52c1c77d21c8e11eca314b3fa17747fd7b3418d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.670444 kubelet[2702]: E0715 23:59:15.670034 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b95bae7372247f43c499589b68ac6407f396efc92342837c544a3ee7811f8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.670444 kubelet[2702]: E0715 23:59:15.670058 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"eaa119243b2e3f47468b70a970761e9c533fe998b097ecd3212abbb479d372cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.670444 kubelet[2702]: E0715 23:59:15.670079 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b95bae7372247f43c499589b68ac6407f396efc92342837c544a3ee7811f8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77686895cb-fx2dk" Jul 15 23:59:15.671182 kubelet[2702]: E0715 23:59:15.670067 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42df911b2fbb8f3faf944cf628e9d5bae9a88e9c6dfdf34f1e27ac4585502f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.671182 kubelet[2702]: E0715 23:59:15.670104 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b95bae7372247f43c499589b68ac6407f396efc92342837c544a3ee7811f8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77686895cb-fx2dk" Jul 15 23:59:15.671182 kubelet[2702]: E0715 23:59:15.670159 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42df911b2fbb8f3faf944cf628e9d5bae9a88e9c6dfdf34f1e27ac4585502f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f64d6979-2m69b" Jul 15 23:59:15.671182 kubelet[2702]: E0715 23:59:15.670079 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"689ac89bc86b1d8f8fcb3d0ce52c1c77d21c8e11eca314b3fa17747fd7b3418d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f64d6979-5kwll" Jul 15 23:59:15.671318 kubelet[2702]: E0715 23:59:15.670170 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77686895cb-fx2dk_calico-system(c25d4aa9-c262-4230-8df3-19ee05cb967f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77686895cb-fx2dk_calico-system(c25d4aa9-c262-4230-8df3-19ee05cb967f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b95bae7372247f43c499589b68ac6407f396efc92342837c544a3ee7811f8d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77686895cb-fx2dk" podUID="c25d4aa9-c262-4230-8df3-19ee05cb967f" Jul 15 23:59:15.671318 kubelet[2702]: E0715 23:59:15.670189 2702 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42df911b2fbb8f3faf944cf628e9d5bae9a88e9c6dfdf34f1e27ac4585502f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f64d6979-2m69b" Jul 15 23:59:15.671318 kubelet[2702]: E0715 23:59:15.670196 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"689ac89bc86b1d8f8fcb3d0ce52c1c77d21c8e11eca314b3fa17747fd7b3418d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f64d6979-5kwll" Jul 15 23:59:15.671484 kubelet[2702]: E0715 23:59:15.670247 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f64d6979-2m69b_calico-apiserver(20f53838-cc61-41d5-95ec-cc1e15cdb769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f64d6979-2m69b_calico-apiserver(20f53838-cc61-41d5-95ec-cc1e15cdb769)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42df911b2fbb8f3faf944cf628e9d5bae9a88e9c6dfdf34f1e27ac4585502f0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f64d6979-2m69b" podUID="20f53838-cc61-41d5-95ec-cc1e15cdb769" Jul 15 23:59:15.671484 kubelet[2702]: E0715 23:59:15.670268 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f64d6979-5kwll_calico-apiserver(c5d6104a-41a6-410f-928e-c8788c0d34a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f64d6979-5kwll_calico-apiserver(c5d6104a-41a6-410f-928e-c8788c0d34a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"689ac89bc86b1d8f8fcb3d0ce52c1c77d21c8e11eca314b3fa17747fd7b3418d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f64d6979-5kwll" podUID="c5d6104a-41a6-410f-928e-c8788c0d34a0" Jul 15 23:59:15.671612 kubelet[2702]: E0715 23:59:15.670093 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaa119243b2e3f47468b70a970761e9c533fe998b097ecd3212abbb479d372cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gm476" Jul 15 23:59:15.671612 kubelet[2702]: E0715 23:59:15.669985 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4edcfabb33cf37cdf57fa4d9f6bf910062905ec4ad3c2a5764abe9abfefe7b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 15 23:59:15.671612 kubelet[2702]: E0715 23:59:15.670299 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaa119243b2e3f47468b70a970761e9c533fe998b097ecd3212abbb479d372cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gm476" Jul 15 23:59:15.671612 kubelet[2702]: E0715 23:59:15.670314 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4edcfabb33cf37cdf57fa4d9f6bf910062905ec4ad3c2a5764abe9abfefe7b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d9bbf597-qj6jm" Jul 15 23:59:15.671740 kubelet[2702]: E0715 23:59:15.670333 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4edcfabb33cf37cdf57fa4d9f6bf910062905ec4ad3c2a5764abe9abfefe7b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d9bbf597-qj6jm" Jul 15 23:59:15.671740 kubelet[2702]: E0715 23:59:15.670334 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gm476_kube-system(7d7f1199-1547-4be5-8aa6-0ec801a111c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gm476_kube-system(7d7f1199-1547-4be5-8aa6-0ec801a111c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eaa119243b2e3f47468b70a970761e9c533fe998b097ecd3212abbb479d372cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gm476" podUID="7d7f1199-1547-4be5-8aa6-0ec801a111c9" Jul 15 23:59:15.671740 kubelet[2702]: E0715 23:59:15.670008 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55186337516f0eaccb006bd9347731d88468bb871d1ab2e77aa6de58ef88dd92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.671846 kubelet[2702]: E0715 23:59:15.670375 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55186337516f0eaccb006bd9347731d88468bb871d1ab2e77aa6de58ef88dd92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hjxmw" Jul 15 23:59:15.671846 kubelet[2702]: E0715 23:59:15.670362 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d9bbf597-qj6jm_calico-system(60ea8394-7c5f-487c-b841-fc0d13d92798)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-86d9bbf597-qj6jm_calico-system(60ea8394-7c5f-487c-b841-fc0d13d92798)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4edcfabb33cf37cdf57fa4d9f6bf910062905ec4ad3c2a5764abe9abfefe7b94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d9bbf597-qj6jm" podUID="60ea8394-7c5f-487c-b841-fc0d13d92798" Jul 15 23:59:15.671846 kubelet[2702]: E0715 23:59:15.670421 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55186337516f0eaccb006bd9347731d88468bb871d1ab2e77aa6de58ef88dd92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hjxmw" Jul 15 23:59:15.671964 kubelet[2702]: E0715 23:59:15.670453 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hjxmw_kube-system(7f2b822f-8c23-4f99-836e-74e8efcaaf0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hjxmw_kube-system(7f2b822f-8c23-4f99-836e-74e8efcaaf0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55186337516f0eaccb006bd9347731d88468bb871d1ab2e77aa6de58ef88dd92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hjxmw" podUID="7f2b822f-8c23-4f99-836e-74e8efcaaf0b" Jul 15 23:59:15.671964 kubelet[2702]: E0715 23:59:15.670038 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e7a26aa43c8260d231b44ae161da0d42b5128bc453d0cdce05c78df41dfcc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:15.671964 kubelet[2702]: E0715 23:59:15.670508 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e7a26aa43c8260d231b44ae161da0d42b5128bc453d0cdce05c78df41dfcc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-m7vc8" Jul 15 23:59:15.672076 kubelet[2702]: E0715 23:59:15.670526 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e7a26aa43c8260d231b44ae161da0d42b5128bc453d0cdce05c78df41dfcc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-m7vc8" Jul 15 23:59:15.672076 kubelet[2702]: E0715 23:59:15.670549 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-m7vc8_calico-system(4f0bea4f-a598-42a6-9a83-eca16f1aebc6)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"goldmane-768f4c5c69-m7vc8_calico-system(4f0bea4f-a598-42a6-9a83-eca16f1aebc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70e7a26aa43c8260d231b44ae161da0d42b5128bc453d0cdce05c78df41dfcc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-m7vc8" podUID="4f0bea4f-a598-42a6-9a83-eca16f1aebc6" Jul 15 23:59:15.962019 containerd[1565]: time="2025-07-15T23:59:15.961956309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 23:59:16.865861 systemd[1]: Created slice kubepods-besteffort-pod5a8939d7_2475_4d58_9e48_7fc1e896bab6.slice - libcontainer container kubepods-besteffort-pod5a8939d7_2475_4d58_9e48_7fc1e896bab6.slice. Jul 15 23:59:16.868948 containerd[1565]: time="2025-07-15T23:59:16.868881531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krpv9,Uid:5a8939d7-2475-4d58-9e48-7fc1e896bab6,Namespace:calico-system,Attempt:0,}" Jul 15 23:59:16.931251 containerd[1565]: time="2025-07-15T23:59:16.931164105Z" level=error msg="Failed to destroy network for sandbox \"b3fa981a1dc5e794a4177a3ded054b34efb574b48d2bc6a77e6d30bebe5d26b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:16.933681 systemd[1]: run-netns-cni\x2dcd1065a8\x2d2b41\x2df024\x2d6c32\x2d4b0ebf754071.mount: Deactivated successfully. Jul 15 23:59:17.207584 containerd[1565]: time="2025-07-15T23:59:17.207497260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krpv9,Uid:5a8939d7-2475-4d58-9e48-7fc1e896bab6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fa981a1dc5e794a4177a3ded054b34efb574b48d2bc6a77e6d30bebe5d26b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:17.207956 kubelet[2702]: E0715 23:59:17.207901 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fa981a1dc5e794a4177a3ded054b34efb574b48d2bc6a77e6d30bebe5d26b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:17.208500 kubelet[2702]: E0715 23:59:17.207977 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fa981a1dc5e794a4177a3ded054b34efb574b48d2bc6a77e6d30bebe5d26b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krpv9" Jul 15 23:59:17.208500 kubelet[2702]: E0715 23:59:17.207999 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fa981a1dc5e794a4177a3ded054b34efb574b48d2bc6a77e6d30bebe5d26b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krpv9" Jul 15 23:59:17.208500 kubelet[2702]: E0715 23:59:17.208056 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-krpv9_calico-system(5a8939d7-2475-4d58-9e48-7fc1e896bab6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-krpv9_calico-system(5a8939d7-2475-4d58-9e48-7fc1e896bab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3fa981a1dc5e794a4177a3ded054b34efb574b48d2bc6a77e6d30bebe5d26b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-krpv9" podUID="5a8939d7-2475-4d58-9e48-7fc1e896bab6" Jul 15 23:59:20.047247 kubelet[2702]: I0715 23:59:20.047177 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:59:20.047935 kubelet[2702]: E0715 23:59:20.047863 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:59:20.979401 kubelet[2702]: E0715 23:59:20.979329 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:59:26.373405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2465411526.mount: Deactivated successfully. Jul 15 23:59:28.055422 kubelet[2702]: E0715 23:59:28.055287 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:59:28.057426 kubelet[2702]: E0715 23:59:28.057342 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:59:28.057612 containerd[1565]: time="2025-07-15T23:59:28.057417374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hjxmw,Uid:7f2b822f-8c23-4f99-836e-74e8efcaaf0b,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:28.058797 containerd[1565]: time="2025-07-15T23:59:28.058330898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-5kwll,Uid:c5d6104a-41a6-410f-928e-c8788c0d34a0,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:59:28.058797 containerd[1565]: time="2025-07-15T23:59:28.058508671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gm476,Uid:7d7f1199-1547-4be5-8aa6-0ec801a111c9,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:28.330194 containerd[1565]: time="2025-07-15T23:59:28.330015786Z" level=error msg="Failed to destroy network for sandbox \"461958c64378aeb033a7344d91d29e1e6f0fbea264801ed04839a51e2dface16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:28.333734 systemd[1]: run-netns-cni\x2ddd09d402\x2de61a\x2d521c\x2d10f3\x2dd06272d60993.mount: Deactivated successfully. 
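Every sandbox failure in this stretch bottoms out in the same error: stat /var/lib/calico/nodename: no such file or directory. The Calico CNI plugin learns its node name from that file, which the calico/node container writes into the shared /var/lib/calico/ host mount once it is up; until then every CNI add/delete is refused. A minimal sketch of that dependency (helper name and exact wording are ours, not Calico's source):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFromFile sketches the check behind the repeated
    // "stat /var/lib/calico/nodename" failures: the CNI binary cannot
    // identify its node until calico/node has written this file.
    func nodenameFromFile(path string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := nodenameFromFile("/var/lib/calico/nodename")
        if err != nil {
            fmt.Println(err) // the state this log is in until calico-node starts at 23:59:30
            return
        }
        fmt.Println("node name:", name)
    }

This also explains why both the add and the delete paths fail with the same message: sandbox teardown goes through the same plugin, so even "Failed to destroy network" errors trace back to the missing nodename file.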
Jul 15 23:59:28.399654 containerd[1565]: time="2025-07-15T23:59:28.399581453Z" level=error msg="Failed to destroy network for sandbox \"101a735b2ef4ae80f9254aaf9f0d533c984300ab49176b8a41cb8794b509829d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:28.402772 systemd[1]: run-netns-cni\x2d6a6f6fc5\x2d6848\x2dbaca\x2d2984\x2d1892e92e9492.mount: Deactivated successfully. Jul 15 23:59:28.461946 containerd[1565]: time="2025-07-15T23:59:28.461868865Z" level=error msg="Failed to destroy network for sandbox \"22143f95243fbd3f5889f0b3bb8c72400447776c360c4d9acb727a35c32beb5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:28.471139 containerd[1565]: time="2025-07-15T23:59:28.471097821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:28.541371 containerd[1565]: time="2025-07-15T23:59:28.541263414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hjxmw,Uid:7f2b822f-8c23-4f99-836e-74e8efcaaf0b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"461958c64378aeb033a7344d91d29e1e6f0fbea264801ed04839a51e2dface16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:28.541612 kubelet[2702]: E0715 23:59:28.541565 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461958c64378aeb033a7344d91d29e1e6f0fbea264801ed04839a51e2dface16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:28.541668 kubelet[2702]: E0715 23:59:28.541646 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461958c64378aeb033a7344d91d29e1e6f0fbea264801ed04839a51e2dface16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hjxmw" Jul 15 23:59:28.541707 kubelet[2702]: E0715 23:59:28.541674 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461958c64378aeb033a7344d91d29e1e6f0fbea264801ed04839a51e2dface16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hjxmw" Jul 15 23:59:28.541756 kubelet[2702]: E0715 23:59:28.541728 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hjxmw_kube-system(7f2b822f-8c23-4f99-836e-74e8efcaaf0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hjxmw_kube-system(7f2b822f-8c23-4f99-836e-74e8efcaaf0b)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"461958c64378aeb033a7344d91d29e1e6f0fbea264801ed04839a51e2dface16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hjxmw" podUID="7f2b822f-8c23-4f99-836e-74e8efcaaf0b" Jul 15 23:59:28.603759 containerd[1565]: time="2025-07-15T23:59:28.603540596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-5kwll,Uid:c5d6104a-41a6-410f-928e-c8788c0d34a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"101a735b2ef4ae80f9254aaf9f0d533c984300ab49176b8a41cb8794b509829d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:28.603965 kubelet[2702]: E0715 23:59:28.603881 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"101a735b2ef4ae80f9254aaf9f0d533c984300ab49176b8a41cb8794b509829d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:28.603965 kubelet[2702]: E0715 23:59:28.603951 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"101a735b2ef4ae80f9254aaf9f0d533c984300ab49176b8a41cb8794b509829d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f64d6979-5kwll" Jul 15 23:59:28.604072 kubelet[2702]: E0715 23:59:28.603972 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"101a735b2ef4ae80f9254aaf9f0d533c984300ab49176b8a41cb8794b509829d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f64d6979-5kwll" Jul 15 23:59:28.604072 kubelet[2702]: E0715 23:59:28.604018 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f64d6979-5kwll_calico-apiserver(c5d6104a-41a6-410f-928e-c8788c0d34a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f64d6979-5kwll_calico-apiserver(c5d6104a-41a6-410f-928e-c8788c0d34a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"101a735b2ef4ae80f9254aaf9f0d533c984300ab49176b8a41cb8794b509829d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f64d6979-5kwll" podUID="c5d6104a-41a6-410f-928e-c8788c0d34a0" Jul 15 23:59:28.627921 containerd[1565]: time="2025-07-15T23:59:28.627780608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gm476,Uid:7d7f1199-1547-4be5-8aa6-0ec801a111c9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"22143f95243fbd3f5889f0b3bb8c72400447776c360c4d9acb727a35c32beb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:28.628273 kubelet[2702]: E0715 23:59:28.628100 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22143f95243fbd3f5889f0b3bb8c72400447776c360c4d9acb727a35c32beb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:28.628273 kubelet[2702]: E0715 23:59:28.628160 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22143f95243fbd3f5889f0b3bb8c72400447776c360c4d9acb727a35c32beb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gm476" Jul 15 23:59:28.628273 kubelet[2702]: E0715 23:59:28.628182 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22143f95243fbd3f5889f0b3bb8c72400447776c360c4d9acb727a35c32beb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gm476" Jul 15 23:59:28.628369 kubelet[2702]: E0715 23:59:28.628220 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gm476_kube-system(7d7f1199-1547-4be5-8aa6-0ec801a111c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gm476_kube-system(7d7f1199-1547-4be5-8aa6-0ec801a111c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22143f95243fbd3f5889f0b3bb8c72400447776c360c4d9acb727a35c32beb5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gm476" podUID="7d7f1199-1547-4be5-8aa6-0ec801a111c9" Jul 15 23:59:28.646513 containerd[1565]: time="2025-07-15T23:59:28.646418539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 23:59:28.748578 containerd[1565]: time="2025-07-15T23:59:28.748506068Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:28.821600 containerd[1565]: time="2025-07-15T23:59:28.821473980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:28.822422 containerd[1565]: time="2025-07-15T23:59:28.822041084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 12.860035163s" Jul 15 23:59:28.822422 containerd[1565]: time="2025-07-15T23:59:28.822090266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 23:59:28.846441 containerd[1565]: time="2025-07-15T23:59:28.846075841Z" level=info msg="CreateContainer within sandbox \"9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 23:59:28.857957 containerd[1565]: time="2025-07-15T23:59:28.857815138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d9bbf597-qj6jm,Uid:60ea8394-7c5f-487c-b841-fc0d13d92798,Namespace:calico-system,Attempt:0,}" Jul 15 23:59:28.858099 containerd[1565]: time="2025-07-15T23:59:28.858070807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77686895cb-fx2dk,Uid:c25d4aa9-c262-4230-8df3-19ee05cb967f,Namespace:calico-system,Attempt:0,}" Jul 15 23:59:28.858255 containerd[1565]: time="2025-07-15T23:59:28.858219637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7vc8,Uid:4f0bea4f-a598-42a6-9a83-eca16f1aebc6,Namespace:calico-system,Attempt:0,}" Jul 15 23:59:29.110715 systemd[1]: run-netns-cni\x2d09d84eb8\x2d9808\x2d7a0e\x2d2ccf\x2dde9e95000a78.mount: Deactivated successfully. Jul 15 23:59:29.764794 containerd[1565]: time="2025-07-15T23:59:29.764700735Z" level=error msg="Failed to destroy network for sandbox \"feb4715f38aa26e04924edc6d9153189e735fcbf8d3361244c883ff5e66e0e42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:29.767145 systemd[1]: run-netns-cni\x2d3d96beb1\x2d7398\x2dee5b\x2d543f\x2d11447cae0963.mount: Deactivated successfully. 
Jul 15 23:59:29.940549 containerd[1565]: time="2025-07-15T23:59:29.940164773Z" level=info msg="Container 13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:30.007856 containerd[1565]: time="2025-07-15T23:59:30.007793719Z" level=error msg="Failed to destroy network for sandbox \"765bf47b2388588b6ce032e3e6cf8ff298257117dafe6c745610acb9ce0112e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:30.019817 containerd[1565]: time="2025-07-15T23:59:30.019664160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d9bbf597-qj6jm,Uid:60ea8394-7c5f-487c-b841-fc0d13d92798,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"feb4715f38aa26e04924edc6d9153189e735fcbf8d3361244c883ff5e66e0e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:30.020266 kubelet[2702]: E0715 23:59:30.020223 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feb4715f38aa26e04924edc6d9153189e735fcbf8d3361244c883ff5e66e0e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:30.021016 kubelet[2702]: E0715 23:59:30.020961 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feb4715f38aa26e04924edc6d9153189e735fcbf8d3361244c883ff5e66e0e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d9bbf597-qj6jm" Jul 15 23:59:30.021016 kubelet[2702]: E0715 23:59:30.021007 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feb4715f38aa26e04924edc6d9153189e735fcbf8d3361244c883ff5e66e0e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d9bbf597-qj6jm" Jul 15 23:59:30.021374 kubelet[2702]: E0715 23:59:30.021063 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d9bbf597-qj6jm_calico-system(60ea8394-7c5f-487c-b841-fc0d13d92798)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d9bbf597-qj6jm_calico-system(60ea8394-7c5f-487c-b841-fc0d13d92798)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"feb4715f38aa26e04924edc6d9153189e735fcbf8d3361244c883ff5e66e0e42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d9bbf597-qj6jm" podUID="60ea8394-7c5f-487c-b841-fc0d13d92798" Jul 15 23:59:30.059226 containerd[1565]: 
time="2025-07-15T23:59:30.059058308Z" level=error msg="Failed to destroy network for sandbox \"cfcc14b58dcc9b96f0fed3ff37845968ce7a444e1f34075fb8c6e65a1a45d7a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:30.108532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2420250542.mount: Deactivated successfully. Jul 15 23:59:30.108698 systemd[1]: run-netns-cni\x2da1d53d51\x2d062b\x2dd8ef\x2d78cd\x2dba3d4b5787b5.mount: Deactivated successfully. Jul 15 23:59:30.108801 systemd[1]: run-netns-cni\x2d45948b21\x2d24fc\x2d71bd\x2da59f\x2d80bf6f1d7feb.mount: Deactivated successfully. Jul 15 23:59:30.422599 containerd[1565]: time="2025-07-15T23:59:30.422328145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7vc8,Uid:4f0bea4f-a598-42a6-9a83-eca16f1aebc6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcc14b58dcc9b96f0fed3ff37845968ce7a444e1f34075fb8c6e65a1a45d7a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:30.422956 kubelet[2702]: E0715 23:59:30.422841 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcc14b58dcc9b96f0fed3ff37845968ce7a444e1f34075fb8c6e65a1a45d7a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:30.423371 kubelet[2702]: E0715 23:59:30.422968 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcc14b58dcc9b96f0fed3ff37845968ce7a444e1f34075fb8c6e65a1a45d7a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-m7vc8" Jul 15 23:59:30.423371 kubelet[2702]: E0715 23:59:30.422991 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcc14b58dcc9b96f0fed3ff37845968ce7a444e1f34075fb8c6e65a1a45d7a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-m7vc8" Jul 15 23:59:30.423371 kubelet[2702]: E0715 23:59:30.423094 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-m7vc8_calico-system(4f0bea4f-a598-42a6-9a83-eca16f1aebc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-m7vc8_calico-system(4f0bea4f-a598-42a6-9a83-eca16f1aebc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfcc14b58dcc9b96f0fed3ff37845968ce7a444e1f34075fb8c6e65a1a45d7a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-m7vc8" 
podUID="4f0bea4f-a598-42a6-9a83-eca16f1aebc6" Jul 15 23:59:30.487698 containerd[1565]: time="2025-07-15T23:59:30.487615511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77686895cb-fx2dk,Uid:c25d4aa9-c262-4230-8df3-19ee05cb967f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"765bf47b2388588b6ce032e3e6cf8ff298257117dafe6c745610acb9ce0112e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:30.488009 kubelet[2702]: E0715 23:59:30.487959 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765bf47b2388588b6ce032e3e6cf8ff298257117dafe6c745610acb9ce0112e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:30.488069 kubelet[2702]: E0715 23:59:30.488033 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765bf47b2388588b6ce032e3e6cf8ff298257117dafe6c745610acb9ce0112e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77686895cb-fx2dk" Jul 15 23:59:30.488069 kubelet[2702]: E0715 23:59:30.488054 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765bf47b2388588b6ce032e3e6cf8ff298257117dafe6c745610acb9ce0112e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77686895cb-fx2dk" Jul 15 23:59:30.488215 kubelet[2702]: E0715 23:59:30.488136 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77686895cb-fx2dk_calico-system(c25d4aa9-c262-4230-8df3-19ee05cb967f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77686895cb-fx2dk_calico-system(c25d4aa9-c262-4230-8df3-19ee05cb967f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"765bf47b2388588b6ce032e3e6cf8ff298257117dafe6c745610acb9ce0112e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77686895cb-fx2dk" podUID="c25d4aa9-c262-4230-8df3-19ee05cb967f" Jul 15 23:59:30.492950 containerd[1565]: time="2025-07-15T23:59:30.492899144Z" level=info msg="CreateContainer within sandbox \"9f40352200eaf4e690cabc8da245f143245a4963c26f0af278f934fca4080e53\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691\"" Jul 15 23:59:30.498897 containerd[1565]: time="2025-07-15T23:59:30.498679700Z" level=info msg="StartContainer for \"13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691\"" Jul 15 23:59:30.500813 containerd[1565]: time="2025-07-15T23:59:30.500764782Z" level=info msg="connecting to shim 
13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691" address="unix:///run/containerd/s/7ff341f606fb84e69422ee1c2b8af6dfe34065d660e44612a901cde1be9c9e29" protocol=ttrpc version=3 Jul 15 23:59:30.541550 systemd[1]: Started cri-containerd-13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691.scope - libcontainer container 13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691. Jul 15 23:59:30.594310 containerd[1565]: time="2025-07-15T23:59:30.594265530Z" level=info msg="StartContainer for \"13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691\" returns successfully" Jul 15 23:59:30.680781 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 23:59:30.680902 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 23:59:30.857360 containerd[1565]: time="2025-07-15T23:59:30.857250220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-2m69b,Uid:20f53838-cc61-41d5-95ec-cc1e15cdb769,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:59:31.107730 containerd[1565]: time="2025-07-15T23:59:31.107516358Z" level=error msg="Failed to destroy network for sandbox \"f63d37ffd4ded858d0da778c22bdc3d07680967ebf91787313d51a4f9b3974a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:31.110306 systemd[1]: run-netns-cni\x2d8cb2eade\x2d55f6\x2dde0d\x2db0d9\x2dfae22ada34d9.mount: Deactivated successfully. Jul 15 23:59:31.195049 containerd[1565]: time="2025-07-15T23:59:31.194972860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-2m69b,Uid:20f53838-cc61-41d5-95ec-cc1e15cdb769,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63d37ffd4ded858d0da778c22bdc3d07680967ebf91787313d51a4f9b3974a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:31.195316 kubelet[2702]: E0715 23:59:31.195269 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63d37ffd4ded858d0da778c22bdc3d07680967ebf91787313d51a4f9b3974a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:59:31.195841 kubelet[2702]: E0715 23:59:31.195343 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63d37ffd4ded858d0da778c22bdc3d07680967ebf91787313d51a4f9b3974a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f64d6979-2m69b" Jul 15 23:59:31.195841 kubelet[2702]: E0715 23:59:31.195370 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63d37ffd4ded858d0da778c22bdc3d07680967ebf91787313d51a4f9b3974a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f64d6979-2m69b" Jul 15 23:59:31.195841 kubelet[2702]: E0715 23:59:31.195474 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f64d6979-2m69b_calico-apiserver(20f53838-cc61-41d5-95ec-cc1e15cdb769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f64d6979-2m69b_calico-apiserver(20f53838-cc61-41d5-95ec-cc1e15cdb769)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f63d37ffd4ded858d0da778c22bdc3d07680967ebf91787313d51a4f9b3974a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f64d6979-2m69b" podUID="20f53838-cc61-41d5-95ec-cc1e15cdb769" Jul 15 23:59:31.269824 containerd[1565]: time="2025-07-15T23:59:31.269750915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691\" id:\"9f91b0d27cb9fde1d36dc6f9a212af3b726e0d2dc0e1824f1afddf84b2a075c9\" pid:3955 exit_status:1 exited_at:{seconds:1752623971 nanos:269318654}" Jul 15 23:59:31.858720 containerd[1565]: time="2025-07-15T23:59:31.858634035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krpv9,Uid:5a8939d7-2475-4d58-9e48-7fc1e896bab6,Namespace:calico-system,Attempt:0,}" Jul 15 23:59:32.087267 kubelet[2702]: I0715 23:59:32.087116 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4jckp" podStartSLOduration=4.486459859 podStartE2EDuration="33.087097611s" podCreationTimestamp="2025-07-15 23:58:59 +0000 UTC" firstStartedPulling="2025-07-15 23:59:00.229862304 +0000 UTC m=+18.478315765" lastFinishedPulling="2025-07-15 23:59:28.830500056 +0000 UTC m=+47.078953517" observedRunningTime="2025-07-15 23:59:31.654666666 +0000 UTC m=+49.903120147" watchObservedRunningTime="2025-07-15 23:59:32.087097611 +0000 UTC m=+50.335551072" Jul 15 23:59:32.167349 containerd[1565]: time="2025-07-15T23:59:32.167150225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691\" id:\"6985aeb77ba6c035cc66dffa397d2d332a181853a5044d75bbd609bba20ea828\" pid:3987 exit_status:1 exited_at:{seconds:1752623972 nanos:166842728}" Jul 15 23:59:32.181432 kubelet[2702]: I0715 23:59:32.180854 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c25d4aa9-c262-4230-8df3-19ee05cb967f-whisker-backend-key-pair\") pod \"c25d4aa9-c262-4230-8df3-19ee05cb967f\" (UID: \"c25d4aa9-c262-4230-8df3-19ee05cb967f\") " Jul 15 23:59:32.181432 kubelet[2702]: I0715 23:59:32.180931 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jc7q\" (UniqueName: \"kubernetes.io/projected/c25d4aa9-c262-4230-8df3-19ee05cb967f-kube-api-access-9jc7q\") pod \"c25d4aa9-c262-4230-8df3-19ee05cb967f\" (UID: \"c25d4aa9-c262-4230-8df3-19ee05cb967f\") " Jul 15 23:59:32.181432 kubelet[2702]: I0715 23:59:32.180958 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25d4aa9-c262-4230-8df3-19ee05cb967f-whisker-ca-bundle\") pod \"c25d4aa9-c262-4230-8df3-19ee05cb967f\" (UID: 
\"c25d4aa9-c262-4230-8df3-19ee05cb967f\") " Jul 15 23:59:32.185820 kubelet[2702]: I0715 23:59:32.185741 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25d4aa9-c262-4230-8df3-19ee05cb967f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c25d4aa9-c262-4230-8df3-19ee05cb967f" (UID: "c25d4aa9-c262-4230-8df3-19ee05cb967f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:59:32.195734 kubelet[2702]: I0715 23:59:32.195645 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c25d4aa9-c262-4230-8df3-19ee05cb967f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c25d4aa9-c262-4230-8df3-19ee05cb967f" (UID: "c25d4aa9-c262-4230-8df3-19ee05cb967f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 23:59:32.198516 kubelet[2702]: I0715 23:59:32.198368 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25d4aa9-c262-4230-8df3-19ee05cb967f-kube-api-access-9jc7q" (OuterVolumeSpecName: "kube-api-access-9jc7q") pod "c25d4aa9-c262-4230-8df3-19ee05cb967f" (UID: "c25d4aa9-c262-4230-8df3-19ee05cb967f"). InnerVolumeSpecName "kube-api-access-9jc7q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:59:32.200694 systemd[1]: var-lib-kubelet-pods-c25d4aa9\x2dc262\x2d4230\x2d8df3\x2d19ee05cb967f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9jc7q.mount: Deactivated successfully. Jul 15 23:59:32.200854 systemd[1]: var-lib-kubelet-pods-c25d4aa9\x2dc262\x2d4230\x2d8df3\x2d19ee05cb967f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 23:59:32.282164 kubelet[2702]: I0715 23:59:32.282100 2702 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9jc7q\" (UniqueName: \"kubernetes.io/projected/c25d4aa9-c262-4230-8df3-19ee05cb967f-kube-api-access-9jc7q\") on node \"localhost\" DevicePath \"\"" Jul 15 23:59:32.282164 kubelet[2702]: I0715 23:59:32.282140 2702 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25d4aa9-c262-4230-8df3-19ee05cb967f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 23:59:32.282164 kubelet[2702]: I0715 23:59:32.282149 2702 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c25d4aa9-c262-4230-8df3-19ee05cb967f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 23:59:32.415854 systemd-networkd[1489]: cali9824da76c73: Link UP Jul 15 23:59:32.416893 systemd-networkd[1489]: cali9824da76c73: Gained carrier Jul 15 23:59:32.438596 containerd[1565]: 2025-07-15 23:59:32.120 [INFO][3994] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:59:32.438596 containerd[1565]: 2025-07-15 23:59:32.186 [INFO][3994] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--krpv9-eth0 csi-node-driver- calico-system 5a8939d7-2475-4d58-9e48-7fc1e896bab6 751 0 2025-07-15 23:58:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-krpv9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9824da76c73 [] [] }} ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Namespace="calico-system" Pod="csi-node-driver-krpv9" WorkloadEndpoint="localhost-k8s-csi--node--driver--krpv9-" Jul 15 23:59:32.438596 containerd[1565]: 2025-07-15 23:59:32.188 [INFO][3994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Namespace="calico-system" Pod="csi-node-driver-krpv9" WorkloadEndpoint="localhost-k8s-csi--node--driver--krpv9-eth0" Jul 15 23:59:32.438596 containerd[1565]: 2025-07-15 23:59:32.276 [INFO][4016] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" HandleID="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Workload="localhost-k8s-csi--node--driver--krpv9-eth0" Jul 15 23:59:32.438958 containerd[1565]: 2025-07-15 23:59:32.277 [INFO][4016] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" HandleID="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Workload="localhost-k8s-csi--node--driver--krpv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002af010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-krpv9", "timestamp":"2025-07-15 23:59:32.276624431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:59:32.438958 containerd[1565]: 2025-07-15 23:59:32.277 [INFO][4016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:59:32.438958 containerd[1565]: 2025-07-15 23:59:32.277 [INFO][4016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:59:32.438958 containerd[1565]: 2025-07-15 23:59:32.277 [INFO][4016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:59:32.438958 containerd[1565]: 2025-07-15 23:59:32.322 [INFO][4016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" host="localhost" Jul 15 23:59:32.438958 containerd[1565]: 2025-07-15 23:59:32.335 [INFO][4016] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:59:32.438958 containerd[1565]: 2025-07-15 23:59:32.342 [INFO][4016] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:59:32.438958 containerd[1565]: 2025-07-15 23:59:32.344 [INFO][4016] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:32.438958 containerd[1565]: 2025-07-15 23:59:32.347 [INFO][4016] ipam/ipam.go 208: Affinity has not been confirmed - attempt to confirm it cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:32.439257 containerd[1565]: 2025-07-15 23:59:32.350 [ERROR][4016] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(localhost-192-168-88-128-26) Name="localhost-192-168-88-128-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"pending", Node:"localhost", Type:"host", CIDR:"192.168.88.128/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "localhost-192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Jul 15 23:59:32.439257 containerd[1565]: 2025-07-15 23:59:32.350 [WARNING][4016] ipam/ipam.go 212: Error marking affinity as pending as part of confirmation process cidr=192.168.88.128/26 error=update conflict: BlockAffinity(localhost-192-168-88-128-26) host="localhost" Jul 15 23:59:32.439257 containerd[1565]: 2025-07-15 23:59:32.350 [INFO][4016] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:59:32.439257 containerd[1565]: 2025-07-15 23:59:32.353 [INFO][4016] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:32.439257 containerd[1565]: 2025-07-15 23:59:32.357 [INFO][4016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:32.439257 containerd[1565]: 2025-07-15 23:59:32.357 [INFO][4016] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" 
host="localhost" Jul 15 23:59:32.439528 containerd[1565]: 2025-07-15 23:59:32.359 [INFO][4016] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994 Jul 15 23:59:32.439528 containerd[1565]: 2025-07-15 23:59:32.363 [INFO][4016] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" host="localhost" Jul 15 23:59:32.439584 containerd[1565]: 2025-07-15 23:59:32.366 [ERROR][4016] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-88-128-26) Name="192-168-88-128-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.88.128/26", Affinity:(*string)(0xc00047d7a0), Allocations:[]*int{(*int)(0xc000010920), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002af010), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-krpv9", "timestamp":"2025-07-15 23:59:32.276624431 +0000 UTC"}}}, SequenceNumber:0x185292378fdf540f, SequenceNumberForAllocation:map[string]uint64{"0":0x185292378fdf540e}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Jul 15 23:59:32.439584 containerd[1565]: 2025-07-15 23:59:32.366 [INFO][4016] ipam/ipam.go 1247: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" host="localhost" Jul 15 23:59:32.439584 containerd[1565]: 2025-07-15 23:59:32.384 [INFO][4016] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" host="localhost" Jul 15 23:59:32.439584 containerd[1565]: 2025-07-15 23:59:32.386 [INFO][4016] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994 Jul 15 23:59:32.439584 containerd[1565]: 2025-07-15 23:59:32.392 [INFO][4016] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" host="localhost" Jul 15 23:59:32.439584 containerd[1565]: 2025-07-15 23:59:32.399 [INFO][4016] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" host="localhost" Jul 15 23:59:32.439584 containerd[1565]: 2025-07-15 23:59:32.399 [INFO][4016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" host="localhost" Jul 15 23:59:32.439584 containerd[1565]: 2025-07-15 23:59:32.399 [INFO][4016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:59:32.439584 containerd[1565]: 2025-07-15 23:59:32.399 [INFO][4016] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" HandleID="k8s-pod-network.5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Workload="localhost-k8s-csi--node--driver--krpv9-eth0" Jul 15 23:59:32.439888 containerd[1565]: 2025-07-15 23:59:32.404 [INFO][3994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Namespace="calico-system" Pod="csi-node-driver-krpv9" WorkloadEndpoint="localhost-k8s-csi--node--driver--krpv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--krpv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a8939d7-2475-4d58-9e48-7fc1e896bab6", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-krpv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9824da76c73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:59:32.439888 containerd[1565]: 2025-07-15 23:59:32.404 [INFO][3994] cni-plugin/k8s.go 419: Calico CNI using IPs: 
[192.168.88.129/32] ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Namespace="calico-system" Pod="csi-node-driver-krpv9" WorkloadEndpoint="localhost-k8s-csi--node--driver--krpv9-eth0" Jul 15 23:59:32.439888 containerd[1565]: 2025-07-15 23:59:32.404 [INFO][3994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9824da76c73 ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Namespace="calico-system" Pod="csi-node-driver-krpv9" WorkloadEndpoint="localhost-k8s-csi--node--driver--krpv9-eth0" Jul 15 23:59:32.439888 containerd[1565]: 2025-07-15 23:59:32.417 [INFO][3994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Namespace="calico-system" Pod="csi-node-driver-krpv9" WorkloadEndpoint="localhost-k8s-csi--node--driver--krpv9-eth0" Jul 15 23:59:32.439888 containerd[1565]: 2025-07-15 23:59:32.418 [INFO][3994] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Namespace="calico-system" Pod="csi-node-driver-krpv9" WorkloadEndpoint="localhost-k8s-csi--node--driver--krpv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--krpv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a8939d7-2475-4d58-9e48-7fc1e896bab6", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994", Pod:"csi-node-driver-krpv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9824da76c73", MAC:"3a:d0:fa:c4:ef:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:59:32.439888 containerd[1565]: 2025-07-15 23:59:32.432 [INFO][3994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" Namespace="calico-system" Pod="csi-node-driver-krpv9" WorkloadEndpoint="localhost-k8s-csi--node--driver--krpv9-eth0" Jul 15 23:59:32.455825 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:48514.service - OpenSSH per-connection server daemon (10.0.0.1:48514). 
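The two [ERROR] entries in the IPAM trace above are not allocation failures: the plugin reads the BlockAffinity and IPAMBlock objects, mutates them, and writes them back conditioned on the stored ResourceVersion; when a concurrent writer lands first, the datastore answers "the object has been modified", and the plugin re-reads and retries until the write sticks (here, ordinal 0 of 192.168.88.128/26 becoming 192.168.88.129). A schematic of that read-modify-write loop; the ipamBlock type and errConflict below are illustrative, not Calico's actual API:

    package main

    import (
        "errors"
        "fmt"
    )

    var errConflict = errors.New("update conflict") // stands in for the CRD conflict error

    type ipamBlock struct {
        version     int
        unallocated []int
    }

    // claimOrdinal re-runs the read-modify-write cycle whenever the
    // conditional write loses a race, as in "Failed to update block ...
    // error=update conflict" followed by a fresh "Attempting to assign".
    func claimOrdinal(read func() ipamBlock, writeIfUnchanged func(ipamBlock) error) (int, error) {
        for attempt := 0; attempt < 5; attempt++ {
            b := read()
            if len(b.unallocated) == 0 {
                return 0, errors.New("block exhausted")
            }
            ord := b.unallocated[0]
            b.unallocated = b.unallocated[1:]
            b.version++
            if err := writeIfUnchanged(b); err != nil {
                if errors.Is(err, errConflict) {
                    continue // another writer won; reload and try again
                }
                return 0, err
            }
            return ord, nil
        }
        return 0, errors.New("giving up after repeated conflicts")
    }

    func main() {
        store := ipamBlock{version: 1, unallocated: []int{0, 1, 2}}
        conflictOnce := true
        ord, err := claimOrdinal(
            func() ipamBlock { return store },
            func(b ipamBlock) error {
                if conflictOnce { // simulate the concurrent writer seen in the log
                    conflictOnce = false
                    return errConflict
                }
                store = b
                return nil
            },
        )
        fmt.Println(ord, err) // 0 <nil>, after one retried write
    }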
Jul 15 23:59:32.526191 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 48514 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:59:32.529290 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:32.537967 systemd-logind[1548]: New session 8 of user core. Jul 15 23:59:32.549703 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 23:59:32.592104 containerd[1565]: time="2025-07-15T23:59:32.592013462Z" level=info msg="connecting to shim 5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994" address="unix:///run/containerd/s/246defb429ffbdb027b5a2586745fa254dc0a8633ee7e130b21b3616e153fe6d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:32.624565 systemd[1]: Started cri-containerd-5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994.scope - libcontainer container 5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994. Jul 15 23:59:32.640563 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:59:32.664487 containerd[1565]: time="2025-07-15T23:59:32.664427958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krpv9,Uid:5a8939d7-2475-4d58-9e48-7fc1e896bab6,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994\"" Jul 15 23:59:32.666509 containerd[1565]: time="2025-07-15T23:59:32.666486690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 23:59:32.719476 sshd[4053]: Connection closed by 10.0.0.1 port 48514 Jul 15 23:59:32.719111 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:32.727741 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:48514.service: Deactivated successfully. Jul 15 23:59:32.730507 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:59:32.731976 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Jul 15 23:59:32.733907 systemd-logind[1548]: Removed session 8. Jul 15 23:59:33.050209 systemd[1]: Removed slice kubepods-besteffort-podc25d4aa9_c262_4230_8df3_19ee05cb967f.slice - libcontainer container kubepods-besteffort-podc25d4aa9_c262_4230_8df3_19ee05cb967f.slice. Jul 15 23:59:33.060177 kubelet[2702]: E0715 23:59:33.060093 2702 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc25d4aa9_c262_4230_8df3_19ee05cb967f.slice\": RecentStats: unable to find data in memory cache]" Jul 15 23:59:33.118948 systemd[1]: Created slice kubepods-besteffort-pod249ac6d3_5e38_4a16_8172_621ccf00333d.slice - libcontainer container kubepods-besteffort-pod249ac6d3_5e38_4a16_8172_621ccf00333d.slice. 
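The sshd entries arrive in matched sets: publickey accept, pam_unix session open, a logind session, then close and teardown (session 8 opened at 23:59:32.529, closed at 23:59:32.719, about 190 ms later). A throwaway parser that pairs the open/close lines and prints session lengths; it assumes one journal entry per line (the wrapped lines here would need splitting first) and this log's timestamp layout:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    var (
        stamp  = regexp.MustCompile(`([A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2}\.\d+)`)
        opened = regexp.MustCompile(`pam_unix\(sshd:session\): session opened for user (\w+)`)
        closed = regexp.MustCompile(`pam_unix\(sshd:session\): session closed for user (\w+)`)
    )

    func main() {
        start := map[string]time.Time{}
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            ts := stamp.FindStringSubmatch(line)
            if ts == nil {
                continue
            }
            t, err := time.Parse("Jan 2 15:04:05.000000", ts[1])
            if err != nil {
                continue
            }
            if m := opened.FindStringSubmatch(line); m != nil {
                start[m[1]] = t
            } else if m := closed.FindStringSubmatch(line); m != nil {
                if t0, ok := start[m[1]]; ok {
                    fmt.Printf("user %s: session lasted %s\n", m[1], t.Sub(t0))
                }
            }
        }
    }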
Jul 15 23:59:33.189200 kubelet[2702]: I0715 23:59:33.189138 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/249ac6d3-5e38-4a16-8172-621ccf00333d-whisker-backend-key-pair\") pod \"whisker-b44d9dd-rqgtb\" (UID: \"249ac6d3-5e38-4a16-8172-621ccf00333d\") " pod="calico-system/whisker-b44d9dd-rqgtb" Jul 15 23:59:33.189200 kubelet[2702]: I0715 23:59:33.189189 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/249ac6d3-5e38-4a16-8172-621ccf00333d-whisker-ca-bundle\") pod \"whisker-b44d9dd-rqgtb\" (UID: \"249ac6d3-5e38-4a16-8172-621ccf00333d\") " pod="calico-system/whisker-b44d9dd-rqgtb" Jul 15 23:59:33.189200 kubelet[2702]: I0715 23:59:33.189209 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5bh\" (UniqueName: \"kubernetes.io/projected/249ac6d3-5e38-4a16-8172-621ccf00333d-kube-api-access-wh5bh\") pod \"whisker-b44d9dd-rqgtb\" (UID: \"249ac6d3-5e38-4a16-8172-621ccf00333d\") " pod="calico-system/whisker-b44d9dd-rqgtb" Jul 15 23:59:33.424903 containerd[1565]: time="2025-07-15T23:59:33.424720948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b44d9dd-rqgtb,Uid:249ac6d3-5e38-4a16-8172-621ccf00333d,Namespace:calico-system,Attempt:0,}" Jul 15 23:59:33.689099 systemd-networkd[1489]: cali42283a4143d: Link UP Jul 15 23:59:33.690543 systemd-networkd[1489]: cali42283a4143d: Gained carrier Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.462 [INFO][4110] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.480 [INFO][4110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--b44d9dd--rqgtb-eth0 whisker-b44d9dd- calico-system 249ac6d3-5e38-4a16-8172-621ccf00333d 1013 0 2025-07-15 23:59:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b44d9dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-b44d9dd-rqgtb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali42283a4143d [] [] }} ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Namespace="calico-system" Pod="whisker-b44d9dd-rqgtb" WorkloadEndpoint="localhost-k8s-whisker--b44d9dd--rqgtb-" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.481 [INFO][4110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Namespace="calico-system" Pod="whisker-b44d9dd-rqgtb" WorkloadEndpoint="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.604 [INFO][4187] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" HandleID="k8s-pod-network.f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Workload="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.609 [INFO][4187] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" 
HandleID="k8s-pod-network.f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Workload="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eb00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-b44d9dd-rqgtb", "timestamp":"2025-07-15 23:59:33.604414938 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.609 [INFO][4187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.609 [INFO][4187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.609 [INFO][4187] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.621 [INFO][4187] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" host="localhost" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.632 [INFO][4187] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.642 [INFO][4187] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.652 [INFO][4187] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.660 [INFO][4187] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.660 [INFO][4187] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" host="localhost" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.662 [INFO][4187] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0 Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.668 [INFO][4187] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" host="localhost" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.678 [INFO][4187] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" host="localhost" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.679 [INFO][4187] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" host="localhost" Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.679 [INFO][4187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:59:33.709184 containerd[1565]: 2025-07-15 23:59:33.679 [INFO][4187] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" HandleID="k8s-pod-network.f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Workload="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" Jul 15 23:59:33.710678 containerd[1565]: 2025-07-15 23:59:33.686 [INFO][4110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Namespace="calico-system" Pod="whisker-b44d9dd-rqgtb" WorkloadEndpoint="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b44d9dd--rqgtb-eth0", GenerateName:"whisker-b44d9dd-", Namespace:"calico-system", SelfLink:"", UID:"249ac6d3-5e38-4a16-8172-621ccf00333d", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b44d9dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-b44d9dd-rqgtb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali42283a4143d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:59:33.710678 containerd[1565]: 2025-07-15 23:59:33.686 [INFO][4110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Namespace="calico-system" Pod="whisker-b44d9dd-rqgtb" WorkloadEndpoint="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" Jul 15 23:59:33.710678 containerd[1565]: 2025-07-15 23:59:33.686 [INFO][4110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42283a4143d ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Namespace="calico-system" Pod="whisker-b44d9dd-rqgtb" WorkloadEndpoint="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" Jul 15 23:59:33.710678 containerd[1565]: 2025-07-15 23:59:33.689 [INFO][4110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Namespace="calico-system" Pod="whisker-b44d9dd-rqgtb" WorkloadEndpoint="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" Jul 15 23:59:33.710678 containerd[1565]: 2025-07-15 23:59:33.689 [INFO][4110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Namespace="calico-system" Pod="whisker-b44d9dd-rqgtb" WorkloadEndpoint="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b44d9dd--rqgtb-eth0", GenerateName:"whisker-b44d9dd-", Namespace:"calico-system", SelfLink:"", UID:"249ac6d3-5e38-4a16-8172-621ccf00333d", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b44d9dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0", Pod:"whisker-b44d9dd-rqgtb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali42283a4143d", MAC:"d2:87:ff:83:4a:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:59:33.710678 containerd[1565]: 2025-07-15 23:59:33.700 [INFO][4110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" Namespace="calico-system" Pod="whisker-b44d9dd-rqgtb" WorkloadEndpoint="localhost-k8s-whisker--b44d9dd--rqgtb-eth0" Jul 15 23:59:33.771775 containerd[1565]: time="2025-07-15T23:59:33.771682408Z" level=info msg="connecting to shim f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0" address="unix:///run/containerd/s/261026f71674edac528d5ea01ec4ead779571afb0b2f4eea254c1c80bc82bf31" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:33.811553 systemd[1]: Started cri-containerd-f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0.scope - libcontainer container f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0. 
Jul 15 23:59:33.839986 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:59:33.862966 kubelet[2702]: I0715 23:59:33.862931 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25d4aa9-c262-4230-8df3-19ee05cb967f" path="/var/lib/kubelet/pods/c25d4aa9-c262-4230-8df3-19ee05cb967f/volumes" Jul 15 23:59:33.882325 containerd[1565]: time="2025-07-15T23:59:33.882241291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b44d9dd-rqgtb,Uid:249ac6d3-5e38-4a16-8172-621ccf00333d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0\"" Jul 15 23:59:34.013557 systemd-networkd[1489]: cali9824da76c73: Gained IPv6LL Jul 15 23:59:34.087375 containerd[1565]: time="2025-07-15T23:59:34.087239582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:34.088216 containerd[1565]: time="2025-07-15T23:59:34.088124031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 23:59:34.089670 containerd[1565]: time="2025-07-15T23:59:34.089608786Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:34.092009 containerd[1565]: time="2025-07-15T23:59:34.091928707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:34.092673 containerd[1565]: time="2025-07-15T23:59:34.092614424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.426099951s" Jul 15 23:59:34.092673 containerd[1565]: time="2025-07-15T23:59:34.092670158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 23:59:34.097509 containerd[1565]: time="2025-07-15T23:59:34.097460464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 23:59:34.105494 containerd[1565]: time="2025-07-15T23:59:34.105328102Z" level=info msg="CreateContainer within sandbox \"5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 23:59:34.127155 containerd[1565]: time="2025-07-15T23:59:34.127056780Z" level=info msg="Container 6109f9cbfe286c25851517cf64ef42c7916647d1deae39ad52922d4241c66ce0: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:34.147223 containerd[1565]: time="2025-07-15T23:59:34.147161883Z" level=info msg="CreateContainer within sandbox \"5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6109f9cbfe286c25851517cf64ef42c7916647d1deae39ad52922d4241c66ce0\"" Jul 15 23:59:34.147839 containerd[1565]: time="2025-07-15T23:59:34.147802164Z" level=info msg="StartContainer for 
\"6109f9cbfe286c25851517cf64ef42c7916647d1deae39ad52922d4241c66ce0\"" Jul 15 23:59:34.150895 containerd[1565]: time="2025-07-15T23:59:34.150845193Z" level=info msg="connecting to shim 6109f9cbfe286c25851517cf64ef42c7916647d1deae39ad52922d4241c66ce0" address="unix:///run/containerd/s/246defb429ffbdb027b5a2586745fa254dc0a8633ee7e130b21b3616e153fe6d" protocol=ttrpc version=3 Jul 15 23:59:34.177763 systemd[1]: Started cri-containerd-6109f9cbfe286c25851517cf64ef42c7916647d1deae39ad52922d4241c66ce0.scope - libcontainer container 6109f9cbfe286c25851517cf64ef42c7916647d1deae39ad52922d4241c66ce0. Jul 15 23:59:34.232717 systemd-networkd[1489]: vxlan.calico: Link UP Jul 15 23:59:34.232730 systemd-networkd[1489]: vxlan.calico: Gained carrier Jul 15 23:59:34.253645 containerd[1565]: time="2025-07-15T23:59:34.253532842Z" level=info msg="StartContainer for \"6109f9cbfe286c25851517cf64ef42c7916647d1deae39ad52922d4241c66ce0\" returns successfully" Jul 15 23:59:34.843536 systemd-networkd[1489]: cali42283a4143d: Gained IPv6LL Jul 15 23:59:35.675659 systemd-networkd[1489]: vxlan.calico: Gained IPv6LL Jul 15 23:59:36.132854 containerd[1565]: time="2025-07-15T23:59:36.132773679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:36.133820 containerd[1565]: time="2025-07-15T23:59:36.133777211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 23:59:36.135284 containerd[1565]: time="2025-07-15T23:59:36.135242420Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:36.148071 containerd[1565]: time="2025-07-15T23:59:36.138022264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:36.148276 containerd[1565]: time="2025-07-15T23:59:36.138794231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.041282741s" Jul 15 23:59:36.148316 containerd[1565]: time="2025-07-15T23:59:36.148280925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 23:59:36.150373 containerd[1565]: time="2025-07-15T23:59:36.150303078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 23:59:36.151766 containerd[1565]: time="2025-07-15T23:59:36.151716519Z" level=info msg="CreateContainer within sandbox \"f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 23:59:36.162376 containerd[1565]: time="2025-07-15T23:59:36.162299689Z" level=info msg="Container 5511a7b100a085c89e742fbb3706718483fbfb576d60dc6efbbaf9f537286192: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:36.173233 containerd[1565]: time="2025-07-15T23:59:36.173150351Z" level=info msg="CreateContainer within sandbox 
\"f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5511a7b100a085c89e742fbb3706718483fbfb576d60dc6efbbaf9f537286192\"" Jul 15 23:59:36.173910 containerd[1565]: time="2025-07-15T23:59:36.173864571Z" level=info msg="StartContainer for \"5511a7b100a085c89e742fbb3706718483fbfb576d60dc6efbbaf9f537286192\"" Jul 15 23:59:36.175391 containerd[1565]: time="2025-07-15T23:59:36.175322125Z" level=info msg="connecting to shim 5511a7b100a085c89e742fbb3706718483fbfb576d60dc6efbbaf9f537286192" address="unix:///run/containerd/s/261026f71674edac528d5ea01ec4ead779571afb0b2f4eea254c1c80bc82bf31" protocol=ttrpc version=3 Jul 15 23:59:36.226681 systemd[1]: Started cri-containerd-5511a7b100a085c89e742fbb3706718483fbfb576d60dc6efbbaf9f537286192.scope - libcontainer container 5511a7b100a085c89e742fbb3706718483fbfb576d60dc6efbbaf9f537286192. Jul 15 23:59:36.292731 containerd[1565]: time="2025-07-15T23:59:36.292689845Z" level=info msg="StartContainer for \"5511a7b100a085c89e742fbb3706718483fbfb576d60dc6efbbaf9f537286192\" returns successfully" Jul 15 23:59:37.735644 systemd[1]: Started sshd@8-10.0.0.136:22-10.0.0.1:48518.service - OpenSSH per-connection server daemon (10.0.0.1:48518). Jul 15 23:59:38.125924 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 48518 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:59:38.128009 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:38.133474 systemd-logind[1548]: New session 9 of user core. Jul 15 23:59:38.140657 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 23:59:38.314693 sshd[4475]: Connection closed by 10.0.0.1 port 48518 Jul 15 23:59:38.315067 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:38.319316 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Jul 15 23:59:38.319709 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:48518.service: Deactivated successfully. Jul 15 23:59:38.323779 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 23:59:38.328927 systemd-logind[1548]: Removed session 9. 
Jul 15 23:59:38.340669 containerd[1565]: time="2025-07-15T23:59:38.340578746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:38.341685 containerd[1565]: time="2025-07-15T23:59:38.341608224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 23:59:38.343151 containerd[1565]: time="2025-07-15T23:59:38.343098726Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:38.348230 containerd[1565]: time="2025-07-15T23:59:38.347895853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:38.351645 containerd[1565]: time="2025-07-15T23:59:38.351344655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.200970281s" Jul 15 23:59:38.351645 containerd[1565]: time="2025-07-15T23:59:38.351627563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 23:59:38.353097 containerd[1565]: time="2025-07-15T23:59:38.353015306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 23:59:38.356248 containerd[1565]: time="2025-07-15T23:59:38.356196638Z" level=info msg="CreateContainer within sandbox \"5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 23:59:38.368279 containerd[1565]: time="2025-07-15T23:59:38.368213584Z" level=info msg="Container 4cbfbe7f034295f73346539917993647b163ba37afcd5f33d77abc327d919552: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:38.398958 containerd[1565]: time="2025-07-15T23:59:38.398762095Z" level=info msg="CreateContainer within sandbox \"5cbe5c82402bb5f222356ebbc67e56a0ea754316e3aecb2c519d7431b4d5b994\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4cbfbe7f034295f73346539917993647b163ba37afcd5f33d77abc327d919552\"" Jul 15 23:59:38.399856 containerd[1565]: time="2025-07-15T23:59:38.399788878Z" level=info msg="StartContainer for \"4cbfbe7f034295f73346539917993647b163ba37afcd5f33d77abc327d919552\"" Jul 15 23:59:38.401648 containerd[1565]: time="2025-07-15T23:59:38.401591364Z" level=info msg="connecting to shim 4cbfbe7f034295f73346539917993647b163ba37afcd5f33d77abc327d919552" address="unix:///run/containerd/s/246defb429ffbdb027b5a2586745fa254dc0a8633ee7e130b21b3616e153fe6d" protocol=ttrpc version=3 Jul 15 23:59:38.428641 systemd[1]: Started cri-containerd-4cbfbe7f034295f73346539917993647b163ba37afcd5f33d77abc327d919552.scope - libcontainer container 4cbfbe7f034295f73346539917993647b163ba37afcd5f33d77abc327d919552. 
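The pull entries carry enough to estimate transfer rate: 14703784 bytes read for node-driver-registrar over the reported 2.200970281s comes to roughly 6.4 MiB/s (the size "16196439" in the same entry is the image's recorded size, distinct from the bytes actually fetched). The arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    // Pull-rate arithmetic from the node-driver-registrar entries above:
    // bytes read over the wire divided by the reported pull duration.
    func main() {
        const bytesRead = 14703784                   // "active requests=0, bytes read=14703784"
        d, err := time.ParseDuration("2.200970281s") // "... in 2.200970281s"
        if err != nil {
            panic(err)
        }
        fmt.Printf("~%.1f MiB/s\n", float64(bytesRead)/d.Seconds()/(1<<20))
    }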
Jul 15 23:59:38.476798 containerd[1565]: time="2025-07-15T23:59:38.476740190Z" level=info msg="StartContainer for \"4cbfbe7f034295f73346539917993647b163ba37afcd5f33d77abc327d919552\" returns successfully"
Jul 15 23:59:38.928268 kubelet[2702]: I0715 23:59:38.928215 2702 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 15 23:59:38.928268 kubelet[2702]: I0715 23:59:38.928257 2702 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 15 23:59:39.071226 kubelet[2702]: I0715 23:59:39.071081 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-krpv9" podStartSLOduration=34.384346251 podStartE2EDuration="40.07105899s" podCreationTimestamp="2025-07-15 23:58:59 +0000 UTC" firstStartedPulling="2025-07-15 23:59:32.66609783 +0000 UTC m=+50.914551291" lastFinishedPulling="2025-07-15 23:59:38.352810559 +0000 UTC m=+56.601264030" observedRunningTime="2025-07-15 23:59:39.070060504 +0000 UTC m=+57.318513985" watchObservedRunningTime="2025-07-15 23:59:39.07105899 +0000 UTC m=+57.319512451"
Jul 15 23:59:39.857337 kubelet[2702]: E0715 23:59:39.857284 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:39.857836 containerd[1565]: time="2025-07-15T23:59:39.857786729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gm476,Uid:7d7f1199-1547-4be5-8aa6-0ec801a111c9,Namespace:kube-system,Attempt:0,}"
Jul 15 23:59:40.239151 systemd-networkd[1489]: cali4ee607bce01: Link UP
Jul 15 23:59:40.240200 systemd-networkd[1489]: cali4ee607bce01: Gained carrier
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.148 [INFO][4533] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--gm476-eth0 coredns-668d6bf9bc- kube-system 7d7f1199-1547-4be5-8aa6-0ec801a111c9 867 0 2025-07-15 23:58:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-gm476 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4ee607bce01 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Namespace="kube-system" Pod="coredns-668d6bf9bc-gm476" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gm476-"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.149 [INFO][4533] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Namespace="kube-system" Pod="coredns-668d6bf9bc-gm476" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gm476-eth0"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.183 [INFO][4554] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" HandleID="k8s-pod-network.39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Workload="localhost-k8s-coredns--668d6bf9bc--gm476-eth0"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.183 [INFO][4554] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" HandleID="k8s-pod-network.39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Workload="localhost-k8s-coredns--668d6bf9bc--gm476-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-gm476", "timestamp":"2025-07-15 23:59:40.183210164 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.183 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.183 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.183 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.191 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" host="localhost"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.197 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.204 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.207 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.210 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.210 [INFO][4554] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" host="localhost"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.212 [INFO][4554] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.216 [INFO][4554] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" host="localhost"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.229 [INFO][4554] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" host="localhost"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.229 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" host="localhost"
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.229 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:59:40.262611 containerd[1565]: 2025-07-15 23:59:40.229 [INFO][4554] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" HandleID="k8s-pod-network.39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Workload="localhost-k8s-coredns--668d6bf9bc--gm476-eth0"
Jul 15 23:59:40.264767 containerd[1565]: 2025-07-15 23:59:40.233 [INFO][4533] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Namespace="kube-system" Pod="coredns-668d6bf9bc-gm476" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gm476-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--gm476-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d7f1199-1547-4be5-8aa6-0ec801a111c9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-gm476", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ee607bce01", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:59:40.264767 containerd[1565]: 2025-07-15 23:59:40.234 [INFO][4533] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Namespace="kube-system" Pod="coredns-668d6bf9bc-gm476" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gm476-eth0"
Jul 15 23:59:40.264767 containerd[1565]: 2025-07-15 23:59:40.234 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ee607bce01 ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Namespace="kube-system" Pod="coredns-668d6bf9bc-gm476" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gm476-eth0"
Jul 15 23:59:40.264767 containerd[1565]: 2025-07-15 23:59:40.240 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Namespace="kube-system" Pod="coredns-668d6bf9bc-gm476" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gm476-eth0"
Jul 15 23:59:40.264767 containerd[1565]: 2025-07-15 23:59:40.243 [INFO][4533] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Namespace="kube-system" Pod="coredns-668d6bf9bc-gm476" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gm476-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--gm476-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d7f1199-1547-4be5-8aa6-0ec801a111c9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323", Pod:"coredns-668d6bf9bc-gm476", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ee607bce01", MAC:"0a:2b:df:43:af:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:59:40.265288 containerd[1565]: 2025-07-15 23:59:40.257 [INFO][4533] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" Namespace="kube-system" Pod="coredns-668d6bf9bc-gm476" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gm476-eth0"
Jul 15 23:59:40.357018 containerd[1565]: time="2025-07-15T23:59:40.356929265Z" level=info msg="connecting to shim 39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323" address="unix:///run/containerd/s/a5787c3a26e7aa11920d634dbcaf8aa9701eef1be2c9be5e72551282c4911edb" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:59:40.387731 systemd[1]: Started cri-containerd-39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323.scope - libcontainer container 39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323.
Jul 15 23:59:40.407827 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:59:40.450877 containerd[1565]: time="2025-07-15T23:59:40.450794609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gm476,Uid:7d7f1199-1547-4be5-8aa6-0ec801a111c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323\""
Jul 15 23:59:40.451531 kubelet[2702]: E0715 23:59:40.451495 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:40.455234 containerd[1565]: time="2025-07-15T23:59:40.455184761Z" level=info msg="CreateContainer within sandbox \"39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 23:59:40.474497 containerd[1565]: time="2025-07-15T23:59:40.474429882Z" level=info msg="Container d2e01b088196b59d9fc6a16c339f54aa59a943a49a143604e1f501fdce7915c4: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:59:40.484684 containerd[1565]: time="2025-07-15T23:59:40.484624304Z" level=info msg="CreateContainer within sandbox \"39474f139f613320be36d6c6269799ec9b0ff9b13978805a4978cdb8cc7c3323\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2e01b088196b59d9fc6a16c339f54aa59a943a49a143604e1f501fdce7915c4\""
Jul 15 23:59:40.485743 containerd[1565]: time="2025-07-15T23:59:40.485668837Z" level=info msg="StartContainer for \"d2e01b088196b59d9fc6a16c339f54aa59a943a49a143604e1f501fdce7915c4\""
Jul 15 23:59:40.487121 containerd[1565]: time="2025-07-15T23:59:40.487066905Z" level=info msg="connecting to shim d2e01b088196b59d9fc6a16c339f54aa59a943a49a143604e1f501fdce7915c4" address="unix:///run/containerd/s/a5787c3a26e7aa11920d634dbcaf8aa9701eef1be2c9be5e72551282c4911edb" protocol=ttrpc version=3
Jul 15 23:59:40.522879 systemd[1]: Started cri-containerd-d2e01b088196b59d9fc6a16c339f54aa59a943a49a143604e1f501fdce7915c4.scope - libcontainer container d2e01b088196b59d9fc6a16c339f54aa59a943a49a143604e1f501fdce7915c4.
Jul 15 23:59:40.568684 containerd[1565]: time="2025-07-15T23:59:40.568629243Z" level=info msg="StartContainer for \"d2e01b088196b59d9fc6a16c339f54aa59a943a49a143604e1f501fdce7915c4\" returns successfully"
Jul 15 23:59:40.679312 containerd[1565]: time="2025-07-15T23:59:40.679228484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:40.690882 containerd[1565]: time="2025-07-15T23:59:40.690803481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Jul 15 23:59:40.692270 containerd[1565]: time="2025-07-15T23:59:40.692220856Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:40.694684 containerd[1565]: time="2025-07-15T23:59:40.694627075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:59:40.695435 containerd[1565]: time="2025-07-15T23:59:40.695371807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.342308769s"
Jul 15 23:59:40.695487 containerd[1565]: time="2025-07-15T23:59:40.695436943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Jul 15 23:59:40.697925 containerd[1565]: time="2025-07-15T23:59:40.697877109Z" level=info msg="CreateContainer within sandbox \"f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 15 23:59:40.711338 containerd[1565]: time="2025-07-15T23:59:40.711259807Z" level=info msg="Container 90d38303cbf1deacea35d5403c5a1f7741b4a08086c7aac63bb75ebb7b7c9094: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:59:40.724054 containerd[1565]: time="2025-07-15T23:59:40.723997334Z" level=info msg="CreateContainer within sandbox \"f57a0d2a2dee2db2ab1d226d3e77fb73dfd3747b24ba89b1c68d327702e5f2e0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"90d38303cbf1deacea35d5403c5a1f7741b4a08086c7aac63bb75ebb7b7c9094\""
Jul 15 23:59:40.724664 containerd[1565]: time="2025-07-15T23:59:40.724534095Z" level=info msg="StartContainer for \"90d38303cbf1deacea35d5403c5a1f7741b4a08086c7aac63bb75ebb7b7c9094\""
Jul 15 23:59:40.725696 containerd[1565]: time="2025-07-15T23:59:40.725673081Z" level=info msg="connecting to shim 90d38303cbf1deacea35d5403c5a1f7741b4a08086c7aac63bb75ebb7b7c9094" address="unix:///run/containerd/s/261026f71674edac528d5ea01ec4ead779571afb0b2f4eea254c1c80bc82bf31" protocol=ttrpc version=3
Jul 15 23:59:40.762598 systemd[1]: Started cri-containerd-90d38303cbf1deacea35d5403c5a1f7741b4a08086c7aac63bb75ebb7b7c9094.scope - libcontainer container 90d38303cbf1deacea35d5403c5a1f7741b4a08086c7aac63bb75ebb7b7c9094.
Jul 15 23:59:40.814409 containerd[1565]: time="2025-07-15T23:59:40.814226937Z" level=info msg="StartContainer for \"90d38303cbf1deacea35d5403c5a1f7741b4a08086c7aac63bb75ebb7b7c9094\" returns successfully"
Jul 15 23:59:40.858147 containerd[1565]: time="2025-07-15T23:59:40.858099422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-5kwll,Uid:c5d6104a-41a6-410f-928e-c8788c0d34a0,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:59:40.858683 containerd[1565]: time="2025-07-15T23:59:40.858197563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7vc8,Uid:4f0bea4f-a598-42a6-9a83-eca16f1aebc6,Namespace:calico-system,Attempt:0,}"
Jul 15 23:59:41.035355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512458624.mount: Deactivated successfully.
Jul 15 23:59:41.035885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3277398745.mount: Deactivated successfully.
Jul 15 23:59:41.066227 systemd-networkd[1489]: cali80e74edebf4: Link UP
Jul 15 23:59:41.067487 systemd-networkd[1489]: cali80e74edebf4: Gained carrier
Jul 15 23:59:41.074410 kubelet[2702]: E0715 23:59:41.073946 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:40.983 [INFO][4701] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0 goldmane-768f4c5c69- calico-system 4f0bea4f-a598-42a6-9a83-eca16f1aebc6 877 0 2025-07-15 23:58:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-m7vc8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali80e74edebf4 [] [] }} ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7vc8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7vc8-"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:40.983 [INFO][4701] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7vc8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.020 [INFO][4724] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" HandleID="k8s-pod-network.6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Workload="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.020 [INFO][4724] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" HandleID="k8s-pod-network.6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Workload="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-m7vc8", "timestamp":"2025-07-15 23:59:41.020172286 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.020 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.020 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.020 [INFO][4724] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.029 [INFO][4724] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" host="localhost"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.036 [INFO][4724] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.042 [INFO][4724] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.044 [INFO][4724] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.046 [INFO][4724] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.046 [INFO][4724] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" host="localhost"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.048 [INFO][4724] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.052 [INFO][4724] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" host="localhost"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.058 [INFO][4724] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" host="localhost"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.058 [INFO][4724] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" host="localhost"
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.058 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:59:41.088894 containerd[1565]: 2025-07-15 23:59:41.058 [INFO][4724] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" HandleID="k8s-pod-network.6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Workload="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0"
Jul 15 23:59:41.089543 containerd[1565]: 2025-07-15 23:59:41.061 [INFO][4701] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7vc8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4f0bea4f-a598-42a6-9a83-eca16f1aebc6", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-m7vc8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali80e74edebf4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:59:41.089543 containerd[1565]: 2025-07-15 23:59:41.062 [INFO][4701] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7vc8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0"
Jul 15 23:59:41.089543 containerd[1565]: 2025-07-15 23:59:41.062 [INFO][4701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80e74edebf4 ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7vc8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0"
Jul 15 23:59:41.089543 containerd[1565]: 2025-07-15 23:59:41.067 [INFO][4701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7vc8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0"
Jul 15 23:59:41.089543 containerd[1565]: 2025-07-15 23:59:41.068 [INFO][4701] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7vc8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4f0bea4f-a598-42a6-9a83-eca16f1aebc6", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001", Pod:"goldmane-768f4c5c69-m7vc8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali80e74edebf4", MAC:"ca:f5:85:a9:14:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:59:41.089543 containerd[1565]: 2025-07-15 23:59:41.081 [INFO][4701] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7vc8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7vc8-eth0"
Jul 15 23:59:41.101256 kubelet[2702]: I0715 23:59:41.101112 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gm476" podStartSLOduration=54.101093774 podStartE2EDuration="54.101093774s" podCreationTimestamp="2025-07-15 23:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:59:41.09969107 +0000 UTC m=+59.348144531" watchObservedRunningTime="2025-07-15 23:59:41.101093774 +0000 UTC m=+59.349547235"
Jul 15 23:59:41.129617 containerd[1565]: time="2025-07-15T23:59:41.129542951Z" level=info msg="connecting to shim 6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001" address="unix:///run/containerd/s/d7a7aef03c2cf33f46ed13df3e6d49b2691529db9dd9d71af89d3b0e5d3e07a8" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:59:41.147796 kubelet[2702]: I0715 23:59:41.147463 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-b44d9dd-rqgtb" podStartSLOduration=1.336151967 podStartE2EDuration="8.147442255s" podCreationTimestamp="2025-07-15 23:59:33 +0000 UTC" firstStartedPulling="2025-07-15 23:59:33.884951414 +0000 UTC m=+52.133404875" lastFinishedPulling="2025-07-15 23:59:40.696241702 +0000 UTC m=+58.944695163" observedRunningTime="2025-07-15 23:59:41.11785769 +0000 UTC m=+59.366311171" watchObservedRunningTime="2025-07-15 23:59:41.147442255 +0000 UTC m=+59.395895716"
Jul 15 23:59:41.180693 systemd[1]: Started cri-containerd-6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001.scope - libcontainer container 6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001.
Jul 15 23:59:41.193687 systemd-networkd[1489]: cali6cb95911d34: Link UP
Jul 15 23:59:41.194959 systemd-networkd[1489]: cali6cb95911d34: Gained carrier
Jul 15 23:59:41.203446 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:40.981 [INFO][4691] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0 calico-apiserver-86f64d6979- calico-apiserver c5d6104a-41a6-410f-928e-c8788c0d34a0 873 0 2025-07-15 23:58:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f64d6979 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86f64d6979-5kwll eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cb95911d34 [] [] }} ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-5kwll" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--5kwll-"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:40.982 [INFO][4691] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-5kwll" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.020 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" HandleID="k8s-pod-network.6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Workload="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.021 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" HandleID="k8s-pod-network.6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Workload="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86f64d6979-5kwll", "timestamp":"2025-07-15 23:59:41.020845428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.021 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.058 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.059 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.130 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" host="localhost"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.149 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.161 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.165 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.168 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.168 [INFO][4718] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" host="localhost"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.170 [INFO][4718] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.175 [INFO][4718] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" host="localhost"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.182 [INFO][4718] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" host="localhost"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.182 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" host="localhost"
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.182 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:59:41.212728 containerd[1565]: 2025-07-15 23:59:41.182 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" HandleID="k8s-pod-network.6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Workload="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0"
Jul 15 23:59:41.213544 containerd[1565]: 2025-07-15 23:59:41.188 [INFO][4691] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-5kwll" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0", GenerateName:"calico-apiserver-86f64d6979-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5d6104a-41a6-410f-928e-c8788c0d34a0", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f64d6979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86f64d6979-5kwll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb95911d34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:59:41.213544 containerd[1565]: 2025-07-15 23:59:41.188 [INFO][4691] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-5kwll" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0"
Jul 15 23:59:41.213544 containerd[1565]: 2025-07-15 23:59:41.188 [INFO][4691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cb95911d34 ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-5kwll" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0"
Jul 15 23:59:41.213544 containerd[1565]: 2025-07-15 23:59:41.195 [INFO][4691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-5kwll" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0"
Jul 15 23:59:41.213544 containerd[1565]: 2025-07-15 23:59:41.197 [INFO][4691] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-5kwll" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0", GenerateName:"calico-apiserver-86f64d6979-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5d6104a-41a6-410f-928e-c8788c0d34a0", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f64d6979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227", Pod:"calico-apiserver-86f64d6979-5kwll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb95911d34", MAC:"b6:52:47:26:9c:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:59:41.213544 containerd[1565]: 2025-07-15 23:59:41.208 [INFO][4691] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-5kwll" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--5kwll-eth0"
Jul 15 23:59:41.249171 containerd[1565]: time="2025-07-15T23:59:41.249049458Z" level=info msg="connecting to shim 6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227" address="unix:///run/containerd/s/4e7aeee9d6f73bc6dea140d4f2b223d2cc05d4696541865df7a251003f9b723c" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:59:41.279540 systemd[1]: Started cri-containerd-6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227.scope - libcontainer container 6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227.
Jul 15 23:59:41.298010 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:59:41.365485 containerd[1565]: time="2025-07-15T23:59:41.365292140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7vc8,Uid:4f0bea4f-a598-42a6-9a83-eca16f1aebc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001\""
Jul 15 23:59:41.367333 containerd[1565]: time="2025-07-15T23:59:41.367273894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 15 23:59:41.435598 systemd-networkd[1489]: cali4ee607bce01: Gained IPv6LL
Jul 15 23:59:41.474519 containerd[1565]: time="2025-07-15T23:59:41.474479560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-5kwll,Uid:c5d6104a-41a6-410f-928e-c8788c0d34a0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227\""
Jul 15 23:59:42.087937 kubelet[2702]: E0715 23:59:42.087881 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:42.459578 systemd-networkd[1489]: cali6cb95911d34: Gained IPv6LL
Jul 15 23:59:42.715613 systemd-networkd[1489]: cali80e74edebf4: Gained IPv6LL
Jul 15 23:59:42.857228 kubelet[2702]: E0715 23:59:42.857143 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:42.857695 containerd[1565]: time="2025-07-15T23:59:42.857641868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-2m69b,Uid:20f53838-cc61-41d5-95ec-cc1e15cdb769,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:59:42.858240 containerd[1565]: time="2025-07-15T23:59:42.857705832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hjxmw,Uid:7f2b822f-8c23-4f99-836e-74e8efcaaf0b,Namespace:kube-system,Attempt:0,}"
Jul 15 23:59:43.090507 kubelet[2702]: E0715 23:59:43.090332 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:59:43.332852 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:55256.service - OpenSSH per-connection server daemon (10.0.0.1:55256).
Jul 15 23:59:43.591420 systemd-networkd[1489]: calib6bebac5401: Link UP
Jul 15 23:59:43.591908 systemd-networkd[1489]: calib6bebac5401: Gained carrier
Jul 15 23:59:43.614108 sshd[4908]: Accepted publickey for core from 10.0.0.1 port 55256 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:59:43.616756 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:43.622997 systemd-logind[1548]: New session 10 of user core.
Jul 15 23:59:43.632617 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.248 [INFO][4859] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0 calico-apiserver-86f64d6979- calico-apiserver 20f53838-cc61-41d5-95ec-cc1e15cdb769 876 0 2025-07-15 23:58:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f64d6979 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86f64d6979-2m69b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6bebac5401 [] [] }} ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-2m69b" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--2m69b-"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.249 [INFO][4859] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-2m69b" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.288 [INFO][4890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" HandleID="k8s-pod-network.f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Workload="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.288 [INFO][4890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" HandleID="k8s-pod-network.f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Workload="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86f64d6979-2m69b", "timestamp":"2025-07-15 23:59:43.288441765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.288 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.288 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.288 [INFO][4890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.317 [INFO][4890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" host="localhost"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.323 [INFO][4890] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.329 [INFO][4890] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.332 [INFO][4890] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.335 [INFO][4890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.335 [INFO][4890] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" host="localhost"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.336 [INFO][4890] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.418 [INFO][4890] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" host="localhost"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.583 [INFO][4890] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" host="localhost"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.583 [INFO][4890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" host="localhost"
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.583 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:59:43.946264 containerd[1565]: 2025-07-15 23:59:43.583 [INFO][4890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" HandleID="k8s-pod-network.f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Workload="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0"
Jul 15 23:59:43.947256 containerd[1565]: 2025-07-15 23:59:43.587 [INFO][4859] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-2m69b" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0", GenerateName:"calico-apiserver-86f64d6979-", Namespace:"calico-apiserver", SelfLink:"", UID:"20f53838-cc61-41d5-95ec-cc1e15cdb769", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f64d6979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86f64d6979-2m69b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6bebac5401", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:59:43.947256 containerd[1565]: 2025-07-15 23:59:43.588 [INFO][4859] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-2m69b" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0"
Jul 15 23:59:43.947256 containerd[1565]: 2025-07-15 23:59:43.588 [INFO][4859] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6bebac5401 ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-2m69b" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0"
Jul 15 23:59:43.947256 containerd[1565]: 2025-07-15 23:59:43.592 [INFO][4859] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-2m69b" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0"
Jul 15 23:59:43.947256 containerd[1565]: 2025-07-15 23:59:43.593 [INFO][4859] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-2m69b" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0", GenerateName:"calico-apiserver-86f64d6979-", Namespace:"calico-apiserver", SelfLink:"", UID:"20f53838-cc61-41d5-95ec-cc1e15cdb769", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f64d6979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f", Pod:"calico-apiserver-86f64d6979-2m69b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6bebac5401", MAC:"3e:75:8b:e7:86:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:59:43.947256 containerd[1565]: 2025-07-15 23:59:43.942 [INFO][4859] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" Namespace="calico-apiserver" Pod="calico-apiserver-86f64d6979-2m69b" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f64d6979--2m69b-eth0"
Jul 15 23:59:44.351930 sshd[4912]: Connection closed by 10.0.0.1 port 55256
Jul 15 23:59:44.352197 sshd-session[4908]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:44.357055 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:55256.service: Deactivated successfully.
Jul 15 23:59:44.359626 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 23:59:44.361847 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit.
Jul 15 23:59:44.363149 systemd-logind[1548]: Removed session 10.
Jul 15 23:59:44.434847 systemd-networkd[1489]: cali484f30eef57: Link UP
Jul 15 23:59:44.436537 systemd-networkd[1489]: cali484f30eef57: Gained carrier
Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:43.259 [INFO][4870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0 coredns-668d6bf9bc- kube-system 7f2b822f-8c23-4f99-836e-74e8efcaaf0b 875 0 2025-07-15 23:58:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-hjxmw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali484f30eef57 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Namespace="kube-system" Pod="coredns-668d6bf9bc-hjxmw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hjxmw-"
Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:43.259 [INFO][4870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Namespace="kube-system" Pod="coredns-668d6bf9bc-hjxmw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0"
Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:43.295 [INFO][4897] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" HandleID="k8s-pod-network.33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Workload="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0"
Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:43.295 [INFO][4897] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" HandleID="k8s-pod-network.33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Workload="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-hjxmw", "timestamp":"2025-07-15 23:59:43.29502871 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:43.295 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:43.584 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:43.584 [INFO][4897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:43.942 [INFO][4897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" host="localhost" Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.383 [INFO][4897] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.396 [INFO][4897] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.399 [INFO][4897] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.403 [INFO][4897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.403 [INFO][4897] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" host="localhost" Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.405 [INFO][4897] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92 Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.413 [INFO][4897] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" host="localhost" Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.421 [INFO][4897] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" host="localhost" Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.421 [INFO][4897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" host="localhost" Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.421 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
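The ipam/ipam.go trace above follows a fixed sequence: acquire the host-wide lock, find the block affine to this host (192.168.88.128/26, covering .128 through .191), scan it for a free address, write the block back to claim it, then release the lock. A toy Go sketch of the address scan only; the handle and affinity machinery is omitted, and this is an illustration, not Calico's implementation:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree scans a block for the first address not yet handed out.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // covers .128-.191
	used := map[netip.Addr]bool{}
	// Assumption for the toy: .128-.134 were taken by earlier pods in this
	// block (the log shows .134 in use and .135/.136 assigned next).
	stop := netip.MustParseAddr("192.168.88.135")
	for a := netip.MustParseAddr("192.168.88.128"); a.Compare(stop) < 0; a = a.Next() {
		used[a] = true
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println(a) // 192.168.88.135, the address CoreDNS receives above
	}
}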
Jul 15 23:59:44.456699 containerd[1565]: 2025-07-15 23:59:44.421 [INFO][4897] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" HandleID="k8s-pod-network.33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Workload="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0" Jul 15 23:59:44.458561 containerd[1565]: 2025-07-15 23:59:44.425 [INFO][4870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Namespace="kube-system" Pod="coredns-668d6bf9bc-hjxmw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7f2b822f-8c23-4f99-836e-74e8efcaaf0b", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-hjxmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali484f30eef57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:59:44.458561 containerd[1565]: 2025-07-15 23:59:44.426 [INFO][4870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Namespace="kube-system" Pod="coredns-668d6bf9bc-hjxmw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0" Jul 15 23:59:44.458561 containerd[1565]: 2025-07-15 23:59:44.426 [INFO][4870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali484f30eef57 ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Namespace="kube-system" Pod="coredns-668d6bf9bc-hjxmw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0" Jul 15 23:59:44.458561 containerd[1565]: 2025-07-15 23:59:44.436 [INFO][4870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Namespace="kube-system" Pod="coredns-668d6bf9bc-hjxmw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0" Jul 15 23:59:44.458561 
containerd[1565]: 2025-07-15 23:59:44.437 [INFO][4870] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Namespace="kube-system" Pod="coredns-668d6bf9bc-hjxmw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7f2b822f-8c23-4f99-836e-74e8efcaaf0b", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92", Pod:"coredns-668d6bf9bc-hjxmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali484f30eef57", MAC:"26:82:31:37:4e:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:59:44.458832 containerd[1565]: 2025-07-15 23:59:44.451 [INFO][4870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" Namespace="kube-system" Pod="coredns-668d6bf9bc-hjxmw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hjxmw-eth0" Jul 15 23:59:44.459960 containerd[1565]: time="2025-07-15T23:59:44.459792366Z" level=info msg="connecting to shim f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f" address="unix:///run/containerd/s/a7b08ab7ac197627b07075af2ea4a6d9fb2c0268e56e1c208c685a0a9a61ade4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:44.512030 containerd[1565]: time="2025-07-15T23:59:44.511963883Z" level=info msg="connecting to shim 33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92" address="unix:///run/containerd/s/23c66cd5586f027064e73e6679252a6d4ee8468aa3d3c6c021df36bd1dfc8a46" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:44.513656 systemd[1]: Started cri-containerd-f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f.scope - libcontainer container f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f. 
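A decoding note for the two endpoint dumps above: the Ports fields are printed in Go hex notation, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the CoreDNS metrics port), matching the plain [{dns UDP 53 0} {dns-tcp TCP 53 0} {metrics TCP 9153 0}] form logged earlier. For example:

package main

import "fmt"

func main() {
	for _, p := range []int{0x35, 0x23c1} {
		fmt.Printf("%#x = %d\n", p, p) // 0x35 = 53, 0x23c1 = 9153
	}
}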
Jul 15 23:59:44.540437 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:59:44.546501 systemd[1]: Started cri-containerd-33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92.scope - libcontainer container 33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92. Jul 15 23:59:44.562907 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:59:44.617598 containerd[1565]: time="2025-07-15T23:59:44.616084287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f64d6979-2m69b,Uid:20f53838-cc61-41d5-95ec-cc1e15cdb769,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f\"" Jul 15 23:59:44.622656 containerd[1565]: time="2025-07-15T23:59:44.622618841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hjxmw,Uid:7f2b822f-8c23-4f99-836e-74e8efcaaf0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92\"" Jul 15 23:59:44.623490 kubelet[2702]: E0715 23:59:44.623455 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:59:44.625700 containerd[1565]: time="2025-07-15T23:59:44.625662696Z" level=info msg="CreateContainer within sandbox \"33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:59:44.640693 containerd[1565]: time="2025-07-15T23:59:44.640634603Z" level=info msg="Container 0b4dae386acd5fb9c5431ce740217552f738b42bc1609b3331c66cc6a1bb0444: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:44.652776 containerd[1565]: time="2025-07-15T23:59:44.652693989Z" level=info msg="CreateContainer within sandbox \"33e49c09559691aca31d8866ef32f8fee8f79715218fdf5d28dbc3b731d8ec92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b4dae386acd5fb9c5431ce740217552f738b42bc1609b3331c66cc6a1bb0444\"" Jul 15 23:59:44.653597 containerd[1565]: time="2025-07-15T23:59:44.653569830Z" level=info msg="StartContainer for \"0b4dae386acd5fb9c5431ce740217552f738b42bc1609b3331c66cc6a1bb0444\"" Jul 15 23:59:44.655187 containerd[1565]: time="2025-07-15T23:59:44.654993918Z" level=info msg="connecting to shim 0b4dae386acd5fb9c5431ce740217552f738b42bc1609b3331c66cc6a1bb0444" address="unix:///run/containerd/s/23c66cd5586f027064e73e6679252a6d4ee8468aa3d3c6c021df36bd1dfc8a46" protocol=ttrpc version=3 Jul 15 23:59:44.680720 systemd[1]: Started cri-containerd-0b4dae386acd5fb9c5431ce740217552f738b42bc1609b3331c66cc6a1bb0444.scope - libcontainer container 0b4dae386acd5fb9c5431ce740217552f738b42bc1609b3331c66cc6a1bb0444. 
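The kubelet dns.go error above fires because the resolver configuration handed to pods supports at most three nameservers; the host's resolv.conf listed more, so the kubelet kept the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and omitted the rest, exactly as the message says. A hedged sketch of that clamping; the constant and helper are illustrative, not kubelet code:

package main

import "fmt"

const maxNameservers = 3 // the limit the kubelet message refers to

// clampNameservers keeps the first maxNameservers entries, dropping the
// extras the "some nameservers have been omitted" wording describes.
func clampNameservers(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	// The fourth entry is hypothetical; only the first three appear in the log.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	fmt.Println(clampNameservers(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}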
Jul 15 23:59:44.858032 containerd[1565]: time="2025-07-15T23:59:44.857624739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d9bbf597-qj6jm,Uid:60ea8394-7c5f-487c-b841-fc0d13d92798,Namespace:calico-system,Attempt:0,}" Jul 15 23:59:44.956174 containerd[1565]: time="2025-07-15T23:59:44.956007059Z" level=info msg="StartContainer for \"0b4dae386acd5fb9c5431ce740217552f738b42bc1609b3331c66cc6a1bb0444\" returns successfully" Jul 15 23:59:45.099836 kubelet[2702]: E0715 23:59:45.099730 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:59:45.162556 kubelet[2702]: I0715 23:59:45.162434 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hjxmw" podStartSLOduration=58.162416912 podStartE2EDuration="58.162416912s" podCreationTimestamp="2025-07-15 23:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:59:45.161624254 +0000 UTC m=+63.410077725" watchObservedRunningTime="2025-07-15 23:59:45.162416912 +0000 UTC m=+63.410870373" Jul 15 23:59:45.275627 systemd-networkd[1489]: calib6bebac5401: Gained IPv6LL Jul 15 23:59:45.397864 systemd-networkd[1489]: cali4a8716727d0: Link UP Jul 15 23:59:45.398827 systemd-networkd[1489]: cali4a8716727d0: Gained carrier Jul 15 23:59:45.432180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876710728.mount: Deactivated successfully. Jul 15 23:59:45.724651 systemd-networkd[1489]: cali484f30eef57: Gained IPv6LL Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.012 [INFO][5078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0 calico-kube-controllers-86d9bbf597- calico-system 60ea8394-7c5f-487c-b841-fc0d13d92798 870 0 2025-07-15 23:58:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86d9bbf597 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86d9bbf597-qj6jm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4a8716727d0 [] [] }} ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Namespace="calico-system" Pod="calico-kube-controllers-86d9bbf597-qj6jm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.013 [INFO][5078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Namespace="calico-system" Pod="calico-kube-controllers-86d9bbf597-qj6jm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.069 [INFO][5098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" HandleID="k8s-pod-network.2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Workload="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 
23:59:45.070 [INFO][5098] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" HandleID="k8s-pod-network.2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Workload="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003243e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86d9bbf597-qj6jm", "timestamp":"2025-07-15 23:59:45.069719094 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.070 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.070 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.070 [INFO][5098] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.079 [INFO][5098] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" host="localhost" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.087 [INFO][5098] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.094 [INFO][5098] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.098 [INFO][5098] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.103 [INFO][5098] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.103 [INFO][5098] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" host="localhost" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.107 [INFO][5098] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.162 [INFO][5098] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" host="localhost" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.390 [INFO][5098] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" host="localhost" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.390 [INFO][5098] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" host="localhost" Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.391 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:59:45.790481 containerd[1565]: 2025-07-15 23:59:45.391 [INFO][5098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" HandleID="k8s-pod-network.2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Workload="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" Jul 15 23:59:45.791458 containerd[1565]: 2025-07-15 23:59:45.395 [INFO][5078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Namespace="calico-system" Pod="calico-kube-controllers-86d9bbf597-qj6jm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0", GenerateName:"calico-kube-controllers-86d9bbf597-", Namespace:"calico-system", SelfLink:"", UID:"60ea8394-7c5f-487c-b841-fc0d13d92798", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d9bbf597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86d9bbf597-qj6jm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a8716727d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:59:45.791458 containerd[1565]: 2025-07-15 23:59:45.395 [INFO][5078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Namespace="calico-system" Pod="calico-kube-controllers-86d9bbf597-qj6jm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" Jul 15 23:59:45.791458 containerd[1565]: 2025-07-15 23:59:45.395 [INFO][5078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a8716727d0 ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Namespace="calico-system" Pod="calico-kube-controllers-86d9bbf597-qj6jm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" Jul 15 23:59:45.791458 containerd[1565]: 2025-07-15 23:59:45.399 [INFO][5078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Namespace="calico-system" Pod="calico-kube-controllers-86d9bbf597-qj6jm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" Jul 15 23:59:45.791458 containerd[1565]: 2025-07-15 23:59:45.399 [INFO][5078] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Namespace="calico-system" Pod="calico-kube-controllers-86d9bbf597-qj6jm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0", GenerateName:"calico-kube-controllers-86d9bbf597-", Namespace:"calico-system", SelfLink:"", UID:"60ea8394-7c5f-487c-b841-fc0d13d92798", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d9bbf597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd", Pod:"calico-kube-controllers-86d9bbf597-qj6jm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a8716727d0", MAC:"06:a7:29:a9:a2:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:59:45.791458 containerd[1565]: 2025-07-15 23:59:45.785 [INFO][5078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" Namespace="calico-system" Pod="calico-kube-controllers-86d9bbf597-qj6jm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d9bbf597--qj6jm-eth0" Jul 15 23:59:45.884887 containerd[1565]: time="2025-07-15T23:59:45.884821311Z" level=info msg="connecting to shim 2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd" address="unix:///run/containerd/s/6d4869cbb17dec946194a28d709fa0c28b34d4e6794492f65aa5f9a5b6179807" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:45.889690 containerd[1565]: time="2025-07-15T23:59:45.889630886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:45.890695 containerd[1565]: time="2025-07-15T23:59:45.890641015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 23:59:45.895869 containerd[1565]: time="2025-07-15T23:59:45.895785957Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:45.899626 containerd[1565]: time="2025-07-15T23:59:45.899570326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 
23:59:45.900774 containerd[1565]: time="2025-07-15T23:59:45.900724893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.533409127s" Jul 15 23:59:45.900774 containerd[1565]: time="2025-07-15T23:59:45.900770169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 23:59:45.902733 containerd[1565]: time="2025-07-15T23:59:45.902490096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:59:45.904537 containerd[1565]: time="2025-07-15T23:59:45.904503900Z" level=info msg="CreateContainer within sandbox \"6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 23:59:45.916781 containerd[1565]: time="2025-07-15T23:59:45.916721955Z" level=info msg="Container d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:45.923629 systemd[1]: Started cri-containerd-2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd.scope - libcontainer container 2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd. Jul 15 23:59:45.933124 containerd[1565]: time="2025-07-15T23:59:45.931524022Z" level=info msg="CreateContainer within sandbox \"6feb2eee16a6e1e66d9616e3a7e8c2aad0083d9742c343b89cb3792f5923e001\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e\"" Jul 15 23:59:45.936303 containerd[1565]: time="2025-07-15T23:59:45.936250037Z" level=info msg="StartContainer for \"d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e\"" Jul 15 23:59:45.938053 containerd[1565]: time="2025-07-15T23:59:45.937977568Z" level=info msg="connecting to shim d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e" address="unix:///run/containerd/s/d7a7aef03c2cf33f46ed13df3e6d49b2691529db9dd9d71af89d3b0e5d3e07a8" protocol=ttrpc version=3 Jul 15 23:59:45.956350 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:59:45.963800 systemd[1]: Started cri-containerd-d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e.scope - libcontainer container d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e. 
Jul 15 23:59:46.243463 containerd[1565]: time="2025-07-15T23:59:46.243284364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d9bbf597-qj6jm,Uid:60ea8394-7c5f-487c-b841-fc0d13d92798,Namespace:calico-system,Attempt:0,} returns sandbox id \"2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd\"" Jul 15 23:59:46.246123 containerd[1565]: time="2025-07-15T23:59:46.246089000Z" level=info msg="StartContainer for \"d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e\" returns successfully" Jul 15 23:59:46.247623 kubelet[2702]: E0715 23:59:46.247411 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:59:46.875622 systemd-networkd[1489]: cali4a8716727d0: Gained IPv6LL Jul 15 23:59:47.253083 kubelet[2702]: E0715 23:59:47.252655 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:59:47.387200 containerd[1565]: time="2025-07-15T23:59:47.387140786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e\" id:\"d1b87465044b5691debe6ec46eaf6302a081804e078a69d3d01ae0853b5de520\" pid:5216 exit_status:1 exited_at:{seconds:1752623987 nanos:386783236}" Jul 15 23:59:48.378075 containerd[1565]: time="2025-07-15T23:59:48.378003023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e\" id:\"2d29c4d88bfa59beed244dcc61b2b204990bc912995f5d62981a6bf4f7b064c3\" pid:5242 exit_status:1 exited_at:{seconds:1752623988 nanos:377672106}" Jul 15 23:59:49.366476 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:56096.service - OpenSSH per-connection server daemon (10.0.0.1:56096). Jul 15 23:59:49.474107 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 56096 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:59:49.476333 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:49.483733 systemd-logind[1548]: New session 11 of user core. Jul 15 23:59:49.488592 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 23:59:50.028152 sshd[5260]: Connection closed by 10.0.0.1 port 56096 Jul 15 23:59:50.028504 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:50.053966 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:56096.service: Deactivated successfully. Jul 15 23:59:50.057177 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 23:59:50.058125 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Jul 15 23:59:50.062029 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:56112.service - OpenSSH per-connection server daemon (10.0.0.1:56112). Jul 15 23:59:50.062705 systemd-logind[1548]: Removed session 11. Jul 15 23:59:50.124123 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 56112 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:59:50.126031 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:50.132271 systemd-logind[1548]: New session 12 of user core. Jul 15 23:59:50.141746 systemd[1]: Started session-12.scope - Session 12 of User core. 
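The TaskExit events above embed exited_at as raw epoch seconds and nanos. Decoding the first one shows it agrees with the journal timestamp of its own entry (23:59:47.386):

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1752623987 nanos:386783236} from the first TaskExit
	t := time.Unix(1752623987, 386783236).UTC()
	fmt.Println(t) // 2025-07-15 23:59:47.386783236 +0000 UTC
}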
Jul 15 23:59:50.364909 sshd[5276]: Connection closed by 10.0.0.1 port 56112 Jul 15 23:59:50.365188 sshd-session[5274]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:50.375962 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:56112.service: Deactivated successfully. Jul 15 23:59:50.378594 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 23:59:50.379532 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Jul 15 23:59:50.384124 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:56122.service - OpenSSH per-connection server daemon (10.0.0.1:56122). Jul 15 23:59:50.384847 systemd-logind[1548]: Removed session 12. Jul 15 23:59:50.484503 sshd[5287]: Accepted publickey for core from 10.0.0.1 port 56122 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:59:50.486362 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:50.491410 systemd-logind[1548]: New session 13 of user core. Jul 15 23:59:50.506530 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 23:59:50.684521 containerd[1565]: time="2025-07-15T23:59:50.683727498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:50.686192 containerd[1565]: time="2025-07-15T23:59:50.685560170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 23:59:50.688313 containerd[1565]: time="2025-07-15T23:59:50.688082908Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:50.688519 sshd[5289]: Connection closed by 10.0.0.1 port 56122 Jul 15 23:59:50.690255 sshd-session[5287]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:50.693716 containerd[1565]: time="2025-07-15T23:59:50.693660967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:50.694538 containerd[1565]: time="2025-07-15T23:59:50.694495412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.791957963s" Jul 15 23:59:50.694538 containerd[1565]: time="2025-07-15T23:59:50.694535147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 23:59:50.697689 containerd[1565]: time="2025-07-15T23:59:50.697526737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:59:50.699556 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Jul 15 23:59:50.700021 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:56122.service: Deactivated successfully. 
Jul 15 23:59:50.702108 containerd[1565]: time="2025-07-15T23:59:50.702045631Z" level=info msg="CreateContainer within sandbox \"6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:59:50.704123 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 23:59:50.707642 systemd-logind[1548]: Removed session 13. Jul 15 23:59:50.714118 containerd[1565]: time="2025-07-15T23:59:50.714037967Z" level=info msg="Container e71762168831e8fcef089a08f12ff828627073a03722f04781ef62a0045f3b3b: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:50.727302 containerd[1565]: time="2025-07-15T23:59:50.727220260Z" level=info msg="CreateContainer within sandbox \"6e7643839e47454ab3b892728c40f65799dbafca7f32d304478c798a22ce4227\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e71762168831e8fcef089a08f12ff828627073a03722f04781ef62a0045f3b3b\"" Jul 15 23:59:50.729182 containerd[1565]: time="2025-07-15T23:59:50.727789063Z" level=info msg="StartContainer for \"e71762168831e8fcef089a08f12ff828627073a03722f04781ef62a0045f3b3b\"" Jul 15 23:59:50.729472 containerd[1565]: time="2025-07-15T23:59:50.729419076Z" level=info msg="connecting to shim e71762168831e8fcef089a08f12ff828627073a03722f04781ef62a0045f3b3b" address="unix:///run/containerd/s/4e7aeee9d6f73bc6dea140d4f2b223d2cc05d4696541865df7a251003f9b723c" protocol=ttrpc version=3 Jul 15 23:59:50.760750 systemd[1]: Started cri-containerd-e71762168831e8fcef089a08f12ff828627073a03722f04781ef62a0045f3b3b.scope - libcontainer container e71762168831e8fcef089a08f12ff828627073a03722f04781ef62a0045f3b3b. Jul 15 23:59:50.862956 containerd[1565]: time="2025-07-15T23:59:50.862893332Z" level=info msg="StartContainer for \"e71762168831e8fcef089a08f12ff828627073a03722f04781ef62a0045f3b3b\" returns successfully" Jul 15 23:59:51.462535 kubelet[2702]: I0715 23:59:51.461102 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-m7vc8" podStartSLOduration=48.925719957 podStartE2EDuration="53.461083798s" podCreationTimestamp="2025-07-15 23:58:58 +0000 UTC" firstStartedPulling="2025-07-15 23:59:41.366727357 +0000 UTC m=+59.615180818" lastFinishedPulling="2025-07-15 23:59:45.902091198 +0000 UTC m=+64.150544659" observedRunningTime="2025-07-15 23:59:47.326002469 +0000 UTC m=+65.574455950" watchObservedRunningTime="2025-07-15 23:59:51.461083798 +0000 UTC m=+69.709537259" Jul 15 23:59:51.462535 kubelet[2702]: I0715 23:59:51.461311 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86f64d6979-5kwll" podStartSLOduration=47.240677952 podStartE2EDuration="56.461304272s" podCreationTimestamp="2025-07-15 23:58:55 +0000 UTC" firstStartedPulling="2025-07-15 23:59:41.475546515 +0000 UTC m=+59.723999976" lastFinishedPulling="2025-07-15 23:59:50.696172835 +0000 UTC m=+68.944626296" observedRunningTime="2025-07-15 23:59:51.460858236 +0000 UTC m=+69.709311697" watchObservedRunningTime="2025-07-15 23:59:51.461304272 +0000 UTC m=+69.709757733" Jul 15 23:59:52.263856 kubelet[2702]: I0715 23:59:52.263787 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:59:52.300140 containerd[1565]: time="2025-07-15T23:59:52.300064501Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:52.378118 containerd[1565]: 
time="2025-07-15T23:59:52.378013540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:59:52.380306 containerd[1565]: time="2025-07-15T23:59:52.380186280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.682624095s" Jul 15 23:59:52.380306 containerd[1565]: time="2025-07-15T23:59:52.380228882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 23:59:52.381986 containerd[1565]: time="2025-07-15T23:59:52.381644649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 23:59:52.383748 containerd[1565]: time="2025-07-15T23:59:52.383677540Z" level=info msg="CreateContainer within sandbox \"f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:59:52.401681 containerd[1565]: time="2025-07-15T23:59:52.401611802Z" level=info msg="Container 1f2d4279b46e1db5b5d83645135716d3d2116b4f16acfa8f043aa73dc46e8c79: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:52.412501 containerd[1565]: time="2025-07-15T23:59:52.412438522Z" level=info msg="CreateContainer within sandbox \"f15e5f491a9dc2ff52b148044bd576f0fa6e795a10f60d247f8326b5953dcf8f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1f2d4279b46e1db5b5d83645135716d3d2116b4f16acfa8f043aa73dc46e8c79\"" Jul 15 23:59:52.414195 containerd[1565]: time="2025-07-15T23:59:52.413028834Z" level=info msg="StartContainer for \"1f2d4279b46e1db5b5d83645135716d3d2116b4f16acfa8f043aa73dc46e8c79\"" Jul 15 23:59:52.414357 containerd[1565]: time="2025-07-15T23:59:52.414328079Z" level=info msg="connecting to shim 1f2d4279b46e1db5b5d83645135716d3d2116b4f16acfa8f043aa73dc46e8c79" address="unix:///run/containerd/s/a7b08ab7ac197627b07075af2ea4a6d9fb2c0268e56e1c208c685a0a9a61ade4" protocol=ttrpc version=3 Jul 15 23:59:52.442674 systemd[1]: Started cri-containerd-1f2d4279b46e1db5b5d83645135716d3d2116b4f16acfa8f043aa73dc46e8c79.scope - libcontainer container 1f2d4279b46e1db5b5d83645135716d3d2116b4f16acfa8f043aa73dc46e8c79. 
Jul 15 23:59:52.544919 containerd[1565]: time="2025-07-15T23:59:52.544372474Z" level=info msg="StartContainer for \"1f2d4279b46e1db5b5d83645135716d3d2116b4f16acfa8f043aa73dc46e8c79\" returns successfully" Jul 15 23:59:53.597490 kubelet[2702]: I0715 23:59:53.597426 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:59:53.828418 kubelet[2702]: I0715 23:59:53.827770 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86f64d6979-2m69b" podStartSLOduration=51.064599084 podStartE2EDuration="58.827742715s" podCreationTimestamp="2025-07-15 23:58:55 +0000 UTC" firstStartedPulling="2025-07-15 23:59:44.618071383 +0000 UTC m=+62.866524844" lastFinishedPulling="2025-07-15 23:59:52.381215004 +0000 UTC m=+70.629668475" observedRunningTime="2025-07-15 23:59:53.416931209 +0000 UTC m=+71.665384700" watchObservedRunningTime="2025-07-15 23:59:53.827742715 +0000 UTC m=+72.076196176" Jul 15 23:59:54.857310 kubelet[2702]: E0715 23:59:54.857246 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:59:55.709236 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:56132.service - OpenSSH per-connection server daemon (10.0.0.1:56132). Jul 15 23:59:56.457602 sshd[5410]: Accepted publickey for core from 10.0.0.1 port 56132 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:59:56.460017 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:56.465682 systemd-logind[1548]: New session 14 of user core. Jul 15 23:59:56.474565 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 23:59:56.608356 sshd[5414]: Connection closed by 10.0.0.1 port 56132 Jul 15 23:59:56.608752 sshd-session[5410]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:56.614303 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:56132.service: Deactivated successfully. Jul 15 23:59:56.616933 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 23:59:56.617973 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Jul 15 23:59:56.619278 systemd-logind[1548]: Removed session 14. 
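The pod_startup_latency_tracker entries here and a few entries earlier relate their numbers in a fixed way: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time is excluded from the SLO measure. Checking that against the calico-apiserver-86f64d6979-5kwll entry logged above; this is a worked example, not kubelet code:

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	e2e := 56461304272 * time.Nanosecond // podStartE2EDuration=56.461304272s
	firstPull := parse("2025-07-15 23:59:41.475546515")
	lastPull := parse("2025-07-15 23:59:50.696172835")
	fmt.Println(e2e - lastPull.Sub(firstPull)) // 47.240677952s = podStartSLOduration
}

The coredns-668d6bf9bc-hjxmw entry is the degenerate case: both pull timestamps are the zero time (the image was already on the node), so its SLO and E2E durations are identical at 58.162416912s.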
Jul 15 23:59:56.870516 containerd[1565]: time="2025-07-15T23:59:56.870282803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:56.883550 containerd[1565]: time="2025-07-15T23:59:56.883474143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 23:59:56.946586 containerd[1565]: time="2025-07-15T23:59:56.946512612Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:56.981680 containerd[1565]: time="2025-07-15T23:59:56.981598957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:56.982298 containerd[1565]: time="2025-07-15T23:59:56.982264201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.600526032s" Jul 15 23:59:56.982298 containerd[1565]: time="2025-07-15T23:59:56.982295551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 23:59:56.991843 containerd[1565]: time="2025-07-15T23:59:56.991758180Z" level=info msg="CreateContainer within sandbox \"2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 23:59:57.061179 containerd[1565]: time="2025-07-15T23:59:57.060717907Z" level=info msg="Container 8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:57.077172 containerd[1565]: time="2025-07-15T23:59:57.077096233Z" level=info msg="CreateContainer within sandbox \"2bd0b194407098b4434b7dbd72cc9a51fa687d5b725748e5e02a439997893dfd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904\"" Jul 15 23:59:57.078995 containerd[1565]: time="2025-07-15T23:59:57.077688726Z" level=info msg="StartContainer for \"8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904\"" Jul 15 23:59:57.079253 containerd[1565]: time="2025-07-15T23:59:57.079220688Z" level=info msg="connecting to shim 8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904" address="unix:///run/containerd/s/6d4869cbb17dec946194a28d709fa0c28b34d4e6794492f65aa5f9a5b6179807" protocol=ttrpc version=3 Jul 15 23:59:57.109532 systemd[1]: Started cri-containerd-8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904.scope - libcontainer container 8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904. 
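One pattern worth noting in the "connecting to shim" entries: the calico-kube-controllers container above dials the same /run/containerd/s/6d4869... ttrpc socket that its pod sandbox 2bd0b194... used earlier, i.e. containers in a pod reuse the sandbox's shim rather than getting their own. A rough Go sketch that recovers that grouping from such log lines; the sample strings are shortened and simplified, and this is not a containerd API:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Shortened, simplified stand-ins for the two log entries in question.
	entries := []string{
		`connecting to shim 2bd0b194 address="unix:///run/containerd/s/6d4869cb"`,
		`connecting to shim 8ab89f4f address="unix:///run/containerd/s/6d4869cb"`,
	}
	re := regexp.MustCompile(`connecting to shim (\S+) address="([^"]+)"`)
	bySocket := map[string][]string{}
	for _, e := range entries {
		if m := re.FindStringSubmatch(e); m != nil {
			bySocket[m[2]] = append(bySocket[m[2]], m[1])
		}
	}
	for sock, ids := range bySocket {
		fmt.Println(sock, ids) // one socket, two IDs: sandbox + its container
	}
}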
Jul 15 23:59:57.162018 containerd[1565]: time="2025-07-15T23:59:57.161880575Z" level=info msg="StartContainer for \"8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904\" returns successfully" Jul 15 23:59:57.292680 kubelet[2702]: I0715 23:59:57.292603 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86d9bbf597-qj6jm" podStartSLOduration=47.554434613 podStartE2EDuration="58.292584169s" podCreationTimestamp="2025-07-15 23:58:59 +0000 UTC" firstStartedPulling="2025-07-15 23:59:46.244873216 +0000 UTC m=+64.493326678" lastFinishedPulling="2025-07-15 23:59:56.983022773 +0000 UTC m=+75.231476234" observedRunningTime="2025-07-15 23:59:57.290926867 +0000 UTC m=+75.539380328" watchObservedRunningTime="2025-07-15 23:59:57.292584169 +0000 UTC m=+75.541037630" Jul 15 23:59:57.331587 containerd[1565]: time="2025-07-15T23:59:57.331516959Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904\" id:\"bf873c9ff2a2950100eb6f5d792e0dbd062cde2a093e6e6ea660a51116a10bda\" pid:5483 exited_at:{seconds:1752623997 nanos:331187959}" Jul 15 23:59:57.857512 kubelet[2702]: E0715 23:59:57.857446 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 16 00:00:01.628107 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:42942.service - OpenSSH per-connection server daemon (10.0.0.1:42942). Jul 16 00:00:01.707352 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 42942 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 16 00:00:01.709923 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:01.716505 systemd-logind[1548]: New session 15 of user core. Jul 16 00:00:01.726878 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 16 00:00:01.911187 sshd[5496]: Connection closed by 10.0.0.1 port 42942 Jul 16 00:00:01.911683 sshd-session[5494]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:01.920837 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:42942.service: Deactivated successfully. Jul 16 00:00:01.923784 systemd[1]: session-15.scope: Deactivated successfully. Jul 16 00:00:01.925318 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Jul 16 00:00:01.928066 systemd-logind[1548]: Removed session 15. Jul 16 00:00:02.138128 containerd[1565]: time="2025-07-16T00:00:02.138064604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691\" id:\"a0bd81af70ede173a5f1cb98a13a27ce15563e4398fd2091736070bb907584fc\" pid:5521 exited_at:{seconds:1752624002 nanos:137529742}" Jul 16 00:00:06.857571 kubelet[2702]: E0716 00:00:06.857514 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 16 00:00:06.928791 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:42948.service - OpenSSH per-connection server daemon (10.0.0.1:42948). 
Jul 16 00:00:06.988092 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 42948 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 16 00:00:06.989812 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:06.994288 systemd-logind[1548]: New session 16 of user core. Jul 16 00:00:07.000509 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 16 00:00:07.128932 sshd[5539]: Connection closed by 10.0.0.1 port 42948 Jul 16 00:00:07.129190 sshd-session[5537]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:07.134113 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:42948.service: Deactivated successfully. Jul 16 00:00:07.136334 systemd[1]: session-16.scope: Deactivated successfully. Jul 16 00:00:07.137148 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. Jul 16 00:00:07.138539 systemd-logind[1548]: Removed session 16. Jul 16 00:00:07.857063 kubelet[2702]: E0716 00:00:07.856974 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 16 00:00:12.148705 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:52100.service - OpenSSH per-connection server daemon (10.0.0.1:52100). Jul 16 00:00:12.209670 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 52100 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 16 00:00:12.211466 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:12.216507 systemd-logind[1548]: New session 17 of user core. Jul 16 00:00:12.224527 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 16 00:00:12.344033 sshd[5554]: Connection closed by 10.0.0.1 port 52100 Jul 16 00:00:12.344351 sshd-session[5552]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:12.349798 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:52100.service: Deactivated successfully. Jul 16 00:00:12.351869 systemd[1]: session-17.scope: Deactivated successfully. Jul 16 00:00:12.353109 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit. Jul 16 00:00:12.355093 systemd-logind[1548]: Removed session 17. Jul 16 00:00:17.358607 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:52106.service - OpenSSH per-connection server daemon (10.0.0.1:52106). Jul 16 00:00:17.418181 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 52106 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 16 00:00:17.419743 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:17.424654 systemd-logind[1548]: New session 18 of user core. Jul 16 00:00:17.434568 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 16 00:00:17.572249 sshd[5575]: Connection closed by 10.0.0.1 port 52106 Jul 16 00:00:17.572614 sshd-session[5573]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:17.576489 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:52106.service: Deactivated successfully. Jul 16 00:00:17.578909 systemd[1]: session-18.scope: Deactivated successfully. Jul 16 00:00:17.581361 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit. Jul 16 00:00:17.582794 systemd-logind[1548]: Removed session 18. 
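The recurring "Accepted publickey ... RSA SHA256:wrO5NCJW..." lines identify the client key by fingerprint: unpadded base64 of the SHA-256 digest of the key's wire-format blob. A sketch of that encoding; the input here is a placeholder, not the actual key:

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint renders a public-key blob the way the sshd lines above do:
// "SHA256:" plus unpadded base64 of the blob's SHA-256 digest.
func fingerprint(keyBlob []byte) string {
	sum := sha256.Sum256(keyBlob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

func main() {
	fmt.Println(fingerprint([]byte("placeholder key bytes, not the real key")))
}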
Jul 16 00:00:17.758609 containerd[1565]: time="2025-07-16T00:00:17.758542696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904\" id:\"37371dc956d127715d66e0caba089dd74fa0afca1e60c551ccae5ffcd72ecd1e\" pid:5598 exited_at:{seconds:1752624017 nanos:758069016}" Jul 16 00:00:18.342861 containerd[1565]: time="2025-07-16T00:00:18.342790333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e\" id:\"c8f0958a6c6b8b623401d2c2e6e60b1e47a87c4459f16f67199ea0a22d569e1c\" pid:5621 exited_at:{seconds:1752624018 nanos:342424829}" Jul 16 00:00:22.595311 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:59246.service - OpenSSH per-connection server daemon (10.0.0.1:59246). Jul 16 00:00:22.807410 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 59246 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 16 00:00:22.809145 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:22.813753 systemd-logind[1548]: New session 19 of user core. Jul 16 00:00:22.822611 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 16 00:00:23.019596 sshd[5637]: Connection closed by 10.0.0.1 port 59246 Jul 16 00:00:23.019988 sshd-session[5635]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:23.026872 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:59246.service: Deactivated successfully. Jul 16 00:00:23.030063 systemd[1]: session-19.scope: Deactivated successfully. Jul 16 00:00:23.032569 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit. Jul 16 00:00:23.034309 systemd-logind[1548]: Removed session 19. Jul 16 00:00:27.360236 containerd[1565]: time="2025-07-16T00:00:27.360180848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904\" id:\"e9bf7d80cf068446657e8ba343f9a5731b884069385caeb0cdf28bf6deba7790\" pid:5662 exited_at:{seconds:1752624027 nanos:359820395}" Jul 16 00:00:28.040120 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:54608.service - OpenSSH per-connection server daemon (10.0.0.1:54608). Jul 16 00:00:28.267885 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 54608 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 16 00:00:28.271316 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:28.280909 systemd-logind[1548]: New session 20 of user core. Jul 16 00:00:28.288899 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 16 00:00:28.569231 sshd[5676]: Connection closed by 10.0.0.1 port 54608 Jul 16 00:00:28.569686 sshd-session[5674]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:28.576013 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:54608.service: Deactivated successfully. Jul 16 00:00:28.578551 systemd[1]: session-20.scope: Deactivated successfully. Jul 16 00:00:28.579676 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit. Jul 16 00:00:28.582128 systemd-logind[1548]: Removed session 20. 
Jul 16 00:00:32.125754 containerd[1565]: time="2025-07-16T00:00:32.125672431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13cf351bbaea3a8f5cddd4e7ceaf52ffe44144fea7b93c45cb5b259d254b7691\" id:\"8656692527eee72c55a9e0b325b4d7359c89d1b04f599abf78d0d98ef6c88eef\" pid:5701 exited_at:{seconds:1752624032 nanos:125243839}"
Jul 16 00:00:33.582902 systemd[1]: Started sshd@20-10.0.0.136:22-10.0.0.1:54626.service - OpenSSH per-connection server daemon (10.0.0.1:54626).
Jul 16 00:00:33.669794 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 54626 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:33.672191 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:33.678374 systemd-logind[1548]: New session 21 of user core.
Jul 16 00:00:33.688652 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 16 00:00:33.832513 sshd[5716]: Connection closed by 10.0.0.1 port 54626
Jul 16 00:00:33.832980 sshd-session[5714]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:33.842838 systemd[1]: sshd@20-10.0.0.136:22-10.0.0.1:54626.service: Deactivated successfully.
Jul 16 00:00:33.845478 systemd[1]: session-21.scope: Deactivated successfully.
Jul 16 00:00:33.846358 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit.
Jul 16 00:00:33.849884 systemd[1]: Started sshd@21-10.0.0.136:22-10.0.0.1:54636.service - OpenSSH per-connection server daemon (10.0.0.1:54636).
Jul 16 00:00:33.850998 systemd-logind[1548]: Removed session 21.
Jul 16 00:00:33.898989 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 54636 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:33.900983 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:33.906046 systemd-logind[1548]: New session 22 of user core.
Jul 16 00:00:33.912524 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 16 00:00:34.388559 sshd[5731]: Connection closed by 10.0.0.1 port 54636
Jul 16 00:00:34.388991 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:34.406958 systemd[1]: sshd@21-10.0.0.136:22-10.0.0.1:54636.service: Deactivated successfully.
Jul 16 00:00:34.409522 systemd[1]: session-22.scope: Deactivated successfully.
Jul 16 00:00:34.410717 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit.
Jul 16 00:00:34.414989 systemd[1]: Started sshd@22-10.0.0.136:22-10.0.0.1:54644.service - OpenSSH per-connection server daemon (10.0.0.1:54644).
Jul 16 00:00:34.415998 systemd-logind[1548]: Removed session 22.
Jul 16 00:00:34.481011 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 54644 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:34.483071 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:34.488892 systemd-logind[1548]: New session 23 of user core.
Jul 16 00:00:34.497701 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 16 00:00:34.856958 kubelet[2702]: E0716 00:00:34.856897 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 16 00:00:35.577590 sshd[5745]: Connection closed by 10.0.0.1 port 54644
Jul 16 00:00:35.578419 sshd-session[5742]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:35.592362 systemd[1]: sshd@22-10.0.0.136:22-10.0.0.1:54644.service: Deactivated successfully.
Jul 16 00:00:35.595605 systemd[1]: session-23.scope: Deactivated successfully.
Jul 16 00:00:35.597678 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit.
Jul 16 00:00:35.602634 systemd[1]: Started sshd@23-10.0.0.136:22-10.0.0.1:54656.service - OpenSSH per-connection server daemon (10.0.0.1:54656).
Jul 16 00:00:35.605058 systemd-logind[1548]: Removed session 23.
Jul 16 00:00:35.675080 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 54656 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:35.676333 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:35.682000 systemd-logind[1548]: New session 24 of user core.
Jul 16 00:00:35.697678 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 16 00:00:36.100080 sshd[5768]: Connection closed by 10.0.0.1 port 54656
Jul 16 00:00:36.104797 sshd-session[5765]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:36.117615 systemd[1]: sshd@23-10.0.0.136:22-10.0.0.1:54656.service: Deactivated successfully.
Jul 16 00:00:36.122561 systemd[1]: session-24.scope: Deactivated successfully.
Jul 16 00:00:36.128008 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit.
Jul 16 00:00:36.133761 systemd[1]: Started sshd@24-10.0.0.136:22-10.0.0.1:54696.service - OpenSSH per-connection server daemon (10.0.0.1:54696).
Jul 16 00:00:36.140498 systemd-logind[1548]: Removed session 24.
Jul 16 00:00:36.198571 sshd[5780]: Accepted publickey for core from 10.0.0.1 port 54696 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:36.200446 sshd-session[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:36.207465 systemd-logind[1548]: New session 25 of user core.
Jul 16 00:00:36.213654 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 16 00:00:36.357406 sshd[5782]: Connection closed by 10.0.0.1 port 54696
Jul 16 00:00:36.358092 sshd-session[5780]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:36.362189 systemd[1]: sshd@24-10.0.0.136:22-10.0.0.1:54696.service: Deactivated successfully.
Jul 16 00:00:36.364688 systemd[1]: session-25.scope: Deactivated successfully.
Jul 16 00:00:36.368055 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit.
Jul 16 00:00:36.369338 systemd-logind[1548]: Removed session 25.
Jul 16 00:00:41.380480 systemd[1]: Started sshd@25-10.0.0.136:22-10.0.0.1:43152.service - OpenSSH per-connection server daemon (10.0.0.1:43152).
Jul 16 00:00:41.444338 sshd[5796]: Accepted publickey for core from 10.0.0.1 port 43152 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:41.446821 sshd-session[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:41.454316 systemd-logind[1548]: New session 26 of user core.
Jul 16 00:00:41.459571 systemd[1]: Started session-26.scope - Session 26 of User core.
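
Sessions 16 through 26 above all follow the same shape: a publickey accept from 10.0.0.1 with the same key fingerprint, a session scope, and a connection closed after roughly 0.1-1 second. That cadence looks like an automated client running one command per connection (a test or management harness) rather than interactive logins, though the log alone cannot identify the client. A hypothetical pairing of systemd-logind's "New session" and "Removed session" lines to measure session lifetimes follows, assuming the timestamp prefix used in this journal.

```go
// Pair "New session N" with "Removed session N" from journal text on
// stdin and print each session's lifetime. Purely illustrative
// log-scraping; not part of systemd or OpenSSH.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	newSess = regexp.MustCompile(`^(\S+ \d+ \S+) .*New session (\d+) of user`)
	remSess = regexp.MustCompile(`^(\S+ \d+ \S+) .*Removed session (\d+)\.`)
)

// Layout for the "Jul 16 00:00:06.994288" prefix (year is absent,
// which is fine for computing durations within one log).
const stamp = "Jan 2 15:04:05.000000"

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := newSess.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := remSess.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s lived %v\n", m[2], t.Sub(start))
				}
			}
		}
	}
}
```

Feed it the journal text on stdin; consistently sub-second lifetimes across many sessions suggest the churn is scripted.
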
Jul 16 00:00:41.493487 containerd[1565]: time="2025-07-16T00:00:41.493424724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e\" id:\"7c4abef5bbcd78c95d50bb8c00b650f720b32af9e5f8c9b493dccb35ac4b4711\" pid:5812 exited_at:{seconds:1752624041 nanos:492994581}"
Jul 16 00:00:41.585304 sshd[5822]: Connection closed by 10.0.0.1 port 43152
Jul 16 00:00:41.585699 sshd-session[5796]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:41.590344 systemd-logind[1548]: Session 26 logged out. Waiting for processes to exit.
Jul 16 00:00:41.590788 systemd[1]: sshd@25-10.0.0.136:22-10.0.0.1:43152.service: Deactivated successfully.
Jul 16 00:00:41.594003 systemd[1]: session-26.scope: Deactivated successfully.
Jul 16 00:00:41.598748 systemd-logind[1548]: Removed session 26.
Jul 16 00:00:46.599079 systemd[1]: Started sshd@26-10.0.0.136:22-10.0.0.1:43198.service - OpenSSH per-connection server daemon (10.0.0.1:43198).
Jul 16 00:00:46.658793 sshd[5842]: Accepted publickey for core from 10.0.0.1 port 43198 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:46.660632 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:46.665792 systemd-logind[1548]: New session 27 of user core.
Jul 16 00:00:46.672041 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 16 00:00:46.792701 sshd[5844]: Connection closed by 10.0.0.1 port 43198
Jul 16 00:00:46.793084 sshd-session[5842]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:46.798511 systemd[1]: sshd@26-10.0.0.136:22-10.0.0.1:43198.service: Deactivated successfully.
Jul 16 00:00:46.800968 systemd[1]: session-27.scope: Deactivated successfully.
Jul 16 00:00:46.802192 systemd-logind[1548]: Session 27 logged out. Waiting for processes to exit.
Jul 16 00:00:46.804282 systemd-logind[1548]: Removed session 27.
Jul 16 00:00:48.339025 containerd[1565]: time="2025-07-16T00:00:48.338965213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7323a8d2c94c2e801ed560a182dd5982a8edb5621ce5311f0e3b234ca524b7e\" id:\"dc8b694095e19cdb86d3e9a0e02e6c0b3bbef3c109114e536d8d4b378b38f834\" pid:5872 exited_at:{seconds:1752624048 nanos:338446664}"
Jul 16 00:00:51.814937 systemd[1]: Started sshd@27-10.0.0.136:22-10.0.0.1:60466.service - OpenSSH per-connection server daemon (10.0.0.1:60466).
Jul 16 00:00:51.860044 kubelet[2702]: E0716 00:00:51.860008 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 16 00:00:51.874108 sshd[5885]: Accepted publickey for core from 10.0.0.1 port 60466 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:51.876491 sshd-session[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:51.883066 systemd-logind[1548]: New session 28 of user core.
Jul 16 00:00:51.891622 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 16 00:00:52.072725 sshd[5887]: Connection closed by 10.0.0.1 port 60466
Jul 16 00:00:52.074614 sshd-session[5885]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:52.079692 systemd-logind[1548]: Session 28 logged out. Waiting for processes to exit.
Jul 16 00:00:52.081478 systemd[1]: sshd@27-10.0.0.136:22-10.0.0.1:60466.service: Deactivated successfully.
Jul 16 00:00:52.085529 systemd[1]: session-28.scope: Deactivated successfully.
Jul 16 00:00:52.091131 systemd-logind[1548]: Removed session 28.
Jul 16 00:00:57.090631 systemd[1]: Started sshd@28-10.0.0.136:22-10.0.0.1:60478.service - OpenSSH per-connection server daemon (10.0.0.1:60478).
Jul 16 00:00:57.166540 sshd[5909]: Accepted publickey for core from 10.0.0.1 port 60478 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:57.167357 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:57.178412 systemd-logind[1548]: New session 29 of user core.
Jul 16 00:00:57.187238 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 16 00:00:57.328880 containerd[1565]: time="2025-07-16T00:00:57.328815471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ab89f4f5897ea3b6cde6d7a20454a96070a5561033c7f05cd8a68b5e2be7904\" id:\"c550bbd07187939b44f7514a491524180b4855b30d8ae6fe4f99699cabc258d5\" pid:5932 exited_at:{seconds:1752624057 nanos:328246768}"
Jul 16 00:00:57.376644 sshd[5911]: Connection closed by 10.0.0.1 port 60478
Jul 16 00:00:57.377260 sshd-session[5909]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:57.386238 systemd[1]: sshd@28-10.0.0.136:22-10.0.0.1:60478.service: Deactivated successfully.
Jul 16 00:00:57.389312 systemd[1]: session-29.scope: Deactivated successfully.
Jul 16 00:00:57.391698 systemd-logind[1548]: Session 29 logged out. Waiting for processes to exit.
Jul 16 00:00:57.393764 systemd-logind[1548]: Removed session 29.
Jul 16 00:00:57.857367 kubelet[2702]: E0716 00:00:57.857312 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"