Jul 10 00:39:22.817055 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025
Jul 10 00:39:22.817075 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:39:22.817083 kernel: BIOS-provided physical RAM map:
Jul 10 00:39:22.817091 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jul 10 00:39:22.817096 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jul 10 00:39:22.817101 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 10 00:39:22.817108 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 10 00:39:22.817113 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 10 00:39:22.817119 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 10 00:39:22.817124 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 10 00:39:22.817130 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 00:39:22.817135 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 10 00:39:22.817142 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jul 10 00:39:22.817148 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 10 00:39:22.817155 kernel: NX (Execute Disable) protection: active
Jul 10 00:39:22.817161 kernel: APIC: Static calls initialized
Jul 10 00:39:22.817166 kernel: SMBIOS 2.8 present.
Jul 10 00:39:22.817174 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jul 10 00:39:22.817180 kernel: DMI: Memory slots populated: 1/1
Jul 10 00:39:22.817186 kernel: Hypervisor detected: KVM
Jul 10 00:39:22.817192 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 10 00:39:22.817197 kernel: kvm-clock: using sched offset of 5411120703 cycles
Jul 10 00:39:22.817203 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 10 00:39:22.817210 kernel: tsc: Detected 1999.998 MHz processor
Jul 10 00:39:22.817216 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:39:22.817222 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:39:22.817228 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jul 10 00:39:22.817236 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 10 00:39:22.817242 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:39:22.817248 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 10 00:39:22.817254 kernel: Using GB pages for direct mapping
Jul 10 00:39:22.817260 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:39:22.817266 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jul 10 00:39:22.817272 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:39:22.817278 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:39:22.817284 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:39:22.817291 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 10 00:39:22.817297 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:39:22.817303 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:39:22.817309 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:39:22.817318 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:39:22.817324 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jul 10 00:39:22.817332 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jul 10 00:39:22.817339 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 10 00:39:22.817345 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jul 10 00:39:22.817351 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jul 10 00:39:22.817357 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jul 10 00:39:22.817363 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jul 10 00:39:22.817369 kernel: No NUMA configuration found
Jul 10 00:39:22.817376 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jul 10 00:39:22.817384 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jul 10 00:39:22.817390 kernel: Zone ranges:
Jul 10 00:39:22.817396 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:39:22.817402 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 10 00:39:22.817408 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jul 10 00:39:22.817415 kernel: Device empty
Jul 10 00:39:22.817421 kernel: Movable zone start for each node
Jul 10 00:39:22.817427 kernel: Early memory node ranges
Jul 10 00:39:22.817433 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 10 00:39:22.817439 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 10 00:39:22.817447 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jul 10 00:39:22.817453 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jul 10 00:39:22.817459 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:39:22.817465 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 10 00:39:22.817472 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 10 00:39:22.817478 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 10 00:39:22.817484 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 10 00:39:22.817490 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:39:22.817497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 10 00:39:22.817504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 10 00:39:22.817511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:39:22.817517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 10 00:39:22.817523 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 10 00:39:22.817529 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:39:22.817536 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 10 00:39:22.817542 kernel: TSC deadline timer available
Jul 10 00:39:22.817548 kernel: CPU topo: Max. logical packages: 1
Jul 10 00:39:22.817554 kernel: CPU topo: Max. logical dies: 1
Jul 10 00:39:22.817562 kernel: CPU topo: Max. dies per package: 1
Jul 10 00:39:22.817568 kernel: CPU topo: Max. threads per core: 1
Jul 10 00:39:22.817574 kernel: CPU topo: Num. cores per package: 2
Jul 10 00:39:22.817580 kernel: CPU topo: Num. threads per package: 2
Jul 10 00:39:22.817586 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 10 00:39:22.817593 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 10 00:39:22.817599 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 10 00:39:22.817605 kernel: kvm-guest: setup PV sched yield
Jul 10 00:39:22.817611 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 10 00:39:22.817619 kernel: Booting paravirtualized kernel on KVM
Jul 10 00:39:22.817625 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:39:22.817631 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 10 00:39:22.817638 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 10 00:39:22.817644 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 10 00:39:22.817650 kernel: pcpu-alloc: [0] 0 1
Jul 10 00:39:22.817656 kernel: kvm-guest: PV spinlocks enabled
Jul 10 00:39:22.817662 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 10 00:39:22.817669 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:39:22.817678 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:39:22.817684 kernel: random: crng init done
Jul 10 00:39:22.817690 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:39:22.817697 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:39:22.817703 kernel: Fallback order for Node 0: 0
Jul 10 00:39:22.817709 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 10 00:39:22.817715 kernel: Policy zone: Normal
Jul 10 00:39:22.817721 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:39:22.817729 kernel: software IO TLB: area num 2.
Jul 10 00:39:22.817736 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 00:39:22.817742 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 10 00:39:22.817748 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 00:39:22.817754 kernel: Dynamic Preempt: voluntary
Jul 10 00:39:22.817760 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:39:22.817767 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:39:22.817774 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 00:39:22.817780 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:39:22.817786 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:39:22.817794 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:39:22.817800 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:39:22.817806 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 00:39:22.819829 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:39:22.819850 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:39:22.819860 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:39:22.819867 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 10 00:39:22.819874 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:39:22.819880 kernel: Console: colour VGA+ 80x25
Jul 10 00:39:22.819887 kernel: printk: legacy console [tty0] enabled
Jul 10 00:39:22.819894 kernel: printk: legacy console [ttyS0] enabled
Jul 10 00:39:22.819900 kernel: ACPI: Core revision 20240827
Jul 10 00:39:22.819909 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 10 00:39:22.819915 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:39:22.819922 kernel: x2apic enabled
Jul 10 00:39:22.819928 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 00:39:22.819937 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 10 00:39:22.819944 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 10 00:39:22.819950 kernel: kvm-guest: setup PV IPIs
Jul 10 00:39:22.819957 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 10 00:39:22.819963 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Jul 10 00:39:22.819970 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Jul 10 00:39:22.819976 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 10 00:39:22.819983 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 10 00:39:22.819990 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 10 00:39:22.819998 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:39:22.820004 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:39:22.820011 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:39:22.820017 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 10 00:39:22.820024 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 10 00:39:22.820031 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 10 00:39:22.820037 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 10 00:39:22.820044 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 10 00:39:22.820051 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 10 00:39:22.820059 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:39:22.820066 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:39:22.820072 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:39:22.820079 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 10 00:39:22.820086 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:39:22.820092 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jul 10 00:39:22.820099 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jul 10 00:39:22.820105 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:39:22.820113 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:39:22.820120 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 00:39:22.820126 kernel: landlock: Up and running.
Jul 10 00:39:22.820133 kernel: SELinux: Initializing.
Jul 10 00:39:22.820139 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:39:22.820146 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:39:22.820153 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jul 10 00:39:22.820159 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 10 00:39:22.820166 kernel: ... version: 0
Jul 10 00:39:22.820174 kernel: ... bit width: 48
Jul 10 00:39:22.820180 kernel: ... generic registers: 6
Jul 10 00:39:22.820186 kernel: ... value mask: 0000ffffffffffff
Jul 10 00:39:22.820193 kernel: ... max period: 00007fffffffffff
Jul 10 00:39:22.820199 kernel: ... fixed-purpose events: 0
Jul 10 00:39:22.820206 kernel: ... event mask: 000000000000003f
Jul 10 00:39:22.820212 kernel: signal: max sigframe size: 3376
Jul 10 00:39:22.820219 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:39:22.820226 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:39:22.820232 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 00:39:22.820241 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:39:22.820247 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 00:39:22.820254 kernel: .... node #0, CPUs: #1
Jul 10 00:39:22.820260 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 00:39:22.820267 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Jul 10 00:39:22.820274 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 227288K reserved, 0K cma-reserved)
Jul 10 00:39:22.820280 kernel: devtmpfs: initialized
Jul 10 00:39:22.820287 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:39:22.820293 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:39:22.820301 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 00:39:22.820308 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:39:22.820314 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:39:22.820321 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:39:22.820327 kernel: audit: type=2000 audit(1752107961.159:1): state=initialized audit_enabled=0 res=1
Jul 10 00:39:22.820334 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:39:22.820341 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:39:22.820347 kernel: cpuidle: using governor menu
Jul 10 00:39:22.820354 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:39:22.820362 kernel: dca service started, version 1.12.1
Jul 10 00:39:22.820368 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 10 00:39:22.820375 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 10 00:39:22.820381 kernel: PCI: Using configuration type 1 for base access
Jul 10 00:39:22.820388 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:39:22.820395 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:39:22.820401 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:39:22.820408 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:39:22.820416 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:39:22.820422 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:39:22.820429 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:39:22.820435 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:39:22.820442 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:39:22.820448 kernel: ACPI: Interpreter enabled
Jul 10 00:39:22.820455 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 10 00:39:22.820461 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:39:22.820468 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:39:22.820476 kernel: PCI: Using E820 reservations for host bridge windows
Jul 10 00:39:22.820482 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 10 00:39:22.820489 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:39:22.820647 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:39:22.820760 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 10 00:39:22.820901 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 10 00:39:22.820912 kernel: PCI host bridge to bus 0000:00
Jul 10 00:39:22.821019 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 10 00:39:22.821121 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 10 00:39:22.821216 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 10 00:39:22.821310 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 10 00:39:22.821403 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 10 00:39:22.821496 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jul 10 00:39:22.821589 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:39:22.821713 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 10 00:39:22.825876 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 10 00:39:22.826003 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 10 00:39:22.826112 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 10 00:39:22.826217 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 10 00:39:22.826321 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 10 00:39:22.826433 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jul 10 00:39:22.826544 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jul 10 00:39:22.826649 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 10 00:39:22.826754 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 10 00:39:22.826891 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 10 00:39:22.827000 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jul 10 00:39:22.827104 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 10 00:39:22.827208 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 10 00:39:22.827317 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 10 00:39:22.827431 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 10 00:39:22.827535 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 10 00:39:22.827645 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 10 00:39:22.827748 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jul 10 00:39:22.828356 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jul 10 00:39:22.828483 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 10 00:39:22.828602 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 10 00:39:22.828612 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 10 00:39:22.828620 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 10 00:39:22.828626 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 10 00:39:22.828633 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 10 00:39:22.828640 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 10 00:39:22.828646 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 10 00:39:22.828656 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 10 00:39:22.828662 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 10 00:39:22.828668 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 10 00:39:22.828675 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 10 00:39:22.828681 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 10 00:39:22.828688 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 10 00:39:22.828694 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 10 00:39:22.828701 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 10 00:39:22.828707 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 10 00:39:22.828716 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 10 00:39:22.828722 kernel: iommu: Default domain type: Translated
Jul 10 00:39:22.828729 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:39:22.828735 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:39:22.828742 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 10 00:39:22.828748 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jul 10 00:39:22.828755 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 10 00:39:22.828889 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 10 00:39:22.828995 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 10 00:39:22.829103 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 10 00:39:22.829112 kernel: vgaarb: loaded
Jul 10 00:39:22.829119 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 10 00:39:22.829126 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 10 00:39:22.829132 kernel: clocksource: Switched to clocksource kvm-clock
Jul 10 00:39:22.829139 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:39:22.829145 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:39:22.829152 kernel: pnp: PnP ACPI init
Jul 10 00:39:22.829268 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 10 00:39:22.829279 kernel: pnp: PnP ACPI: found 5 devices
Jul 10 00:39:22.829286 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:39:22.829292 kernel: NET: Registered PF_INET protocol family
Jul 10 00:39:22.829299 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:39:22.829305 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:39:22.829312 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:39:22.829319 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:39:22.829328 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:39:22.829334 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:39:22.829341 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:39:22.829347 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:39:22.829354 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:39:22.829360 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:39:22.829457 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 10 00:39:22.829552 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 10 00:39:22.829646 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 10 00:39:22.829744 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 10 00:39:22.831857 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 10 00:39:22.831962 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jul 10 00:39:22.831972 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:39:22.831979 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 10 00:39:22.831986 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jul 10 00:39:22.831993 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Jul 10 00:39:22.831999 kernel: Initialise system trusted keyrings
Jul 10 00:39:22.832010 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:39:22.832016 kernel: Key type asymmetric registered
Jul 10 00:39:22.832023 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:39:22.832030 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:39:22.832036 kernel: io scheduler mq-deadline registered
Jul 10 00:39:22.832043 kernel: io scheduler kyber registered
Jul 10 00:39:22.832049 kernel: io scheduler bfq registered
Jul 10 00:39:22.832056 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:39:22.832063 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 10 00:39:22.832070 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 10 00:39:22.832078 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:39:22.832085 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:39:22.832092 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 10 00:39:22.832099 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 10 00:39:22.832105 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 10 00:39:22.832283 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 10 00:39:22.832296 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 10 00:39:22.832396 kernel: rtc_cmos 00:03: registered as rtc0
Jul 10 00:39:22.832498 kernel: rtc_cmos 00:03: setting system clock to 2025-07-10T00:39:22 UTC (1752107962)
Jul 10 00:39:22.832601 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 10 00:39:22.832610 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 10 00:39:22.832617 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:39:22.832623 kernel: Segment Routing with IPv6
Jul 10 00:39:22.832630 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:39:22.832637 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:39:22.832643 kernel: Key type dns_resolver registered
Jul 10 00:39:22.832650 kernel: IPI shorthand broadcast: enabled
Jul 10 00:39:22.832659 kernel: sched_clock: Marking stable (2447003999, 202932404)->(2678658618, -28722215)
Jul 10 00:39:22.832665 kernel: registered taskstats version 1
Jul 10 00:39:22.832672 kernel: Loading compiled-in X.509 certificates
Jul 10 00:39:22.832678 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf'
Jul 10 00:39:22.832685 kernel: Demotion targets for Node 0: null
Jul 10 00:39:22.832691 kernel: Key type .fscrypt registered
Jul 10 00:39:22.832698 kernel: Key type fscrypt-provisioning registered
Jul 10 00:39:22.832704 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:39:22.832713 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:39:22.832719 kernel: ima: No architecture policies found
Jul 10 00:39:22.832726 kernel: clk: Disabling unused clocks
Jul 10 00:39:22.832732 kernel: Warning: unable to open an initial console.
Jul 10 00:39:22.832739 kernel: Freeing unused kernel image (initmem) memory: 54420K
Jul 10 00:39:22.832746 kernel: Write protecting the kernel read-only data: 24576k
Jul 10 00:39:22.832752 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 10 00:39:22.832759 kernel: Run /init as init process
Jul 10 00:39:22.832765 kernel: with arguments:
Jul 10 00:39:22.832773 kernel: /init
Jul 10 00:39:22.832780 kernel: with environment:
Jul 10 00:39:22.832786 kernel: HOME=/
Jul 10 00:39:22.832792 kernel: TERM=linux
Jul 10 00:39:22.832799 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:39:22.833063 systemd[1]: Successfully made /usr/ read-only.
Jul 10 00:39:22.833078 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:39:22.833087 systemd[1]: Detected virtualization kvm.
Jul 10 00:39:22.833096 systemd[1]: Detected architecture x86-64.
Jul 10 00:39:22.833121 systemd[1]: Running in initrd.
Jul 10 00:39:22.833129 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:39:22.833136 systemd[1]: Hostname set to .
Jul 10 00:39:22.833144 systemd[1]: Initializing machine ID from random generator.
Jul 10 00:39:22.833151 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:39:22.833158 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:39:22.833166 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:39:22.833176 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:39:22.833183 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:39:22.833191 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:39:22.833199 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:39:22.833207 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:39:22.833215 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:39:22.833224 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:39:22.833231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:39:22.833239 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:39:22.833246 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:39:22.833253 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:39:22.833260 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:39:22.833268 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:39:22.833275 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:39:22.833282 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:39:22.833291 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 00:39:22.833299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:39:22.833306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:39:22.833313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:39:22.833321 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:39:22.833330 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:39:22.833339 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:39:22.833346 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:39:22.833354 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 10 00:39:22.833361 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:39:22.833369 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:39:22.833376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:39:22.833383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:39:22.833411 systemd-journald[206]: Collecting audit messages is disabled.
Jul 10 00:39:22.833434 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:39:22.833444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:39:22.833451 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:39:22.833459 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:39:22.833467 systemd-journald[206]: Journal started
Jul 10 00:39:22.833483 systemd-journald[206]: Runtime Journal (/run/log/journal/d8b37ff638e3408595df39129f79440e) is 8M, max 78.5M, 70.5M free.
Jul 10 00:39:22.833161 systemd-modules-load[208]: Inserted module 'overlay'
Jul 10 00:39:22.837851 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:39:22.842320 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:39:22.913661 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:39:22.913678 kernel: Bridge firewalling registered
Jul 10 00:39:22.862083 systemd-modules-load[208]: Inserted module 'br_netfilter'
Jul 10 00:39:22.914275 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:39:22.915638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:39:22.921317 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:39:22.923944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:39:22.927063 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:39:22.931897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:39:22.940305 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:39:22.943381 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 10 00:39:22.945421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:39:22.947835 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:39:22.950943 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:39:22.951686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:39:22.954901 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:39:22.968932 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:39:22.986951 systemd-resolved[244]: Positive Trust Anchors:
Jul 10 00:39:22.987588 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:39:22.987618 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:39:22.992236 systemd-resolved[244]: Defaulting to hostname 'linux'.
Jul 10 00:39:22.993202 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:39:22.994049 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:39:23.035847 kernel: SCSI subsystem initialized
Jul 10 00:39:23.044870 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:39:23.053836 kernel: iscsi: registered transport (tcp)
Jul 10 00:39:23.073174 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:39:23.073216 kernel: QLogic iSCSI HBA Driver
Jul 10 00:39:23.089062 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:39:23.103573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:39:23.105400 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:39:23.137779 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:39:23.139891 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:39:23.179836 kernel: raid6: avx2x4 gen() 29408 MB/s
Jul 10 00:39:23.196831 kernel: raid6: avx2x2 gen() 29724 MB/s
Jul 10 00:39:23.215458 kernel: raid6: avx2x1 gen() 19012 MB/s
Jul 10 00:39:23.215472 kernel: raid6: using algorithm avx2x2 gen() 29724 MB/s
Jul 10 00:39:23.234214 kernel: raid6: .... xor() 29885 MB/s, rmw enabled
Jul 10 00:39:23.234230 kernel: raid6: using avx2x2 recovery algorithm
Jul 10 00:39:23.252838 kernel: xor: automatically using best checksumming function avx
Jul 10 00:39:23.372846 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 00:39:23.378878 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:39:23.380767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:39:23.399139 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jul 10 00:39:23.403406 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:39:23.405708 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 00:39:23.426048 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Jul 10 00:39:23.446024 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:39:23.447339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:39:23.498422 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:39:23.501290 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 00:39:23.553834 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jul 10 00:39:23.561843 kernel: scsi host0: Virtio SCSI HBA
Jul 10 00:39:23.617860 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 10 00:39:23.623892 kernel: libata version 3.00 loaded.
Jul 10 00:39:23.640835 kernel: cryptd: max_cpu_qlen set to 1000
Jul 10 00:39:23.652776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:39:23.652913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:39:23.660879 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 10 00:39:23.656940 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:39:23.658418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:39:23.679116 kernel: AES CTR mode by8 optimization enabled
Jul 10 00:39:23.679165 kernel: ahci 0000:00:1f.2: version 3.0
Jul 10 00:39:23.683846 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 10 00:39:23.687670 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 10 00:39:23.687857 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 10 00:39:23.687994 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 10 00:39:23.700240 kernel: scsi host1: ahci
Jul 10 00:39:23.703603 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jul 10 00:39:23.705434 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jul 10 00:39:23.705578 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 10 00:39:23.705716 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jul 10 00:39:23.713855 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 10 00:39:23.721881 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 00:39:23.721902 kernel: GPT:9289727 != 167739391
Jul 10 00:39:23.721913 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 00:39:23.721923 kernel: GPT:9289727 != 167739391
Jul 10 00:39:23.721931 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 00:39:23.721939 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 00:39:23.721952 kernel: scsi host2: ahci
Jul 10 00:39:23.723985 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 10 00:39:23.727914 kernel: scsi host3: ahci
Jul 10 00:39:23.728860 kernel: scsi host4: ahci
Jul 10 00:39:23.729074 kernel: scsi host5: ahci
Jul 10 00:39:23.731999 kernel: scsi host6: ahci
Jul 10 00:39:23.732157 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Jul 10 00:39:23.732173 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Jul 10 00:39:23.732182 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Jul 10 00:39:23.732191 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Jul 10 00:39:23.732199 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Jul 10 00:39:23.732208 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Jul 10 00:39:23.802162 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jul 10 00:39:23.828942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:39:23.837450 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 10 00:39:23.844194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jul 10 00:39:23.844780 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jul 10 00:39:23.853707 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 10 00:39:23.855878 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 00:39:23.874157 disk-uuid[625]: Primary Header is updated.
Jul 10 00:39:23.874157 disk-uuid[625]: Secondary Entries is updated.
Jul 10 00:39:23.874157 disk-uuid[625]: Secondary Header is updated.
Jul 10 00:39:23.884847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 00:39:24.043449 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 10 00:39:24.043492 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 10 00:39:24.043503 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 10 00:39:24.043512 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 10 00:39:24.043829 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 10 00:39:24.045843 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 10 00:39:24.058881 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:39:24.059915 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:39:24.060790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:39:24.062097 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:39:24.064962 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 00:39:24.079146 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:39:24.899938 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 00:39:24.901028 disk-uuid[626]: The operation has completed successfully.
Jul 10 00:39:24.945476 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 00:39:24.945588 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 00:39:24.971592 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 00:39:24.982140 sh[653]: Success
Jul 10 00:39:24.999568 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:39:24.999593 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:39:25.000198 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 10 00:39:25.010855 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 10 00:39:25.055014 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 00:39:25.060413 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 00:39:25.071452 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 00:39:25.081269 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 10 00:39:25.081290 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (665)
Jul 10 00:39:25.087067 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee
Jul 10 00:39:25.087124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:39:25.089166 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 10 00:39:25.098077 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 00:39:25.099003 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:39:25.099868 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 00:39:25.101910 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 00:39:25.103041 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 00:39:25.129330 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (698)
Jul 10 00:39:25.129353 kernel: BTRFS info (device sda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:39:25.132881 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:39:25.132908 kernel: BTRFS info (device sda6): using free-space-tree
Jul 10 00:39:25.144915 kernel: BTRFS info (device sda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:39:25.145524 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 00:39:25.147038 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 00:39:25.228840 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:39:25.235932 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:39:25.260160 ignition[749]: Ignition 2.21.0
Jul 10 00:39:25.260176 ignition[749]: Stage: fetch-offline
Jul 10 00:39:25.260207 ignition[749]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:39:25.260215 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 10 00:39:25.262198 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:39:25.260294 ignition[749]: parsed url from cmdline: ""
Jul 10 00:39:25.260298 ignition[749]: no config URL provided
Jul 10 00:39:25.260303 ignition[749]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:39:25.260311 ignition[749]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:39:25.260316 ignition[749]: failed to fetch config: resource requires networking
Jul 10 00:39:25.260475 ignition[749]: Ignition finished successfully
Jul 10 00:39:25.275521 systemd-networkd[836]: lo: Link UP
Jul 10 00:39:25.275530 systemd-networkd[836]: lo: Gained carrier
Jul 10 00:39:25.276702 systemd-networkd[836]: Enumeration completed
Jul 10 00:39:25.276777 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:39:25.277330 systemd[1]: Reached target network.target - Network.
Jul 10 00:39:25.277620 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:39:25.277624 systemd-networkd[836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:39:25.279977 systemd-networkd[836]: eth0: Link UP
Jul 10 00:39:25.279980 systemd-networkd[836]: eth0: Gained carrier
Jul 10 00:39:25.279988 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:39:25.280636 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 10 00:39:25.299636 ignition[844]: Ignition 2.21.0
Jul 10 00:39:25.300281 ignition[844]: Stage: fetch
Jul 10 00:39:25.300389 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:39:25.300399 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 10 00:39:25.300552 ignition[844]: parsed url from cmdline: ""
Jul 10 00:39:25.300556 ignition[844]: no config URL provided
Jul 10 00:39:25.300560 ignition[844]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:39:25.300569 ignition[844]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:39:25.300601 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #1
Jul 10 00:39:25.300765 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 10 00:39:25.501673 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #2
Jul 10 00:39:25.501829 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 10 00:39:25.730880 systemd-networkd[836]: eth0: DHCPv4 address 172.238.161.214/24, gateway 172.238.161.1 acquired from 23.40.197.117
Jul 10 00:39:25.902474 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #3
Jul 10 00:39:25.993725 ignition[844]: PUT result: OK
Jul 10 00:39:25.993800 ignition[844]: GET http://169.254.169.254/v1/user-data: attempt #1
Jul 10 00:39:26.112791 ignition[844]: GET result: OK
Jul 10 00:39:26.112961 ignition[844]: parsing config with SHA512: 1dea183016ebbd3ceac0c4ba5553467f8b9b98ea75dd5c2d831b3e6ea5d524581e7d3f9381e10bdfade98da286004d49388054dc8b0c101776575817b54cf657
Jul 10 00:39:26.116230 unknown[844]: fetched base config from "system"
Jul 10 00:39:26.116897 unknown[844]: fetched base config from "system"
Jul 10 00:39:26.117137 ignition[844]: fetch: fetch complete
Jul 10 00:39:26.116904 unknown[844]: fetched user config from "akamai"
Jul 10 00:39:26.117142 ignition[844]: fetch: fetch passed
Jul 10 00:39:26.117180 ignition[844]: Ignition finished successfully
Jul 10 00:39:26.120560 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 10 00:39:26.122877 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 00:39:26.152115 ignition[852]: Ignition 2.21.0
Jul 10 00:39:26.152126 ignition[852]: Stage: kargs
Jul 10 00:39:26.152232 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:39:26.163925 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 00:39:26.152241 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 10 00:39:26.153328 ignition[852]: kargs: kargs passed
Jul 10 00:39:26.153377 ignition[852]: Ignition finished successfully
Jul 10 00:39:26.177933 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 00:39:26.209057 ignition[859]: Ignition 2.21.0
Jul 10 00:39:26.209069 ignition[859]: Stage: disks
Jul 10 00:39:26.209192 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:39:26.209202 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 10 00:39:26.209914 ignition[859]: disks: disks passed
Jul 10 00:39:26.211235 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 00:39:26.209950 ignition[859]: Ignition finished successfully
Jul 10 00:39:26.212357 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 00:39:26.213159 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 00:39:26.214235 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:39:26.215302 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:39:26.216486 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:39:26.218450 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 00:39:26.241392 systemd-fsck[867]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 10 00:39:26.244598 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 00:39:26.246088 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 00:39:26.347849 kernel: EXT4-fs (sda9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none.
Jul 10 00:39:26.347851 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 00:39:26.348794 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:39:26.350619 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:39:26.353879 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 00:39:26.355173 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 10 00:39:26.355980 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:39:26.356004 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:39:26.361477 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 00:39:26.363967 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 00:39:26.369844 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (875)
Jul 10 00:39:26.373039 kernel: BTRFS info (device sda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:39:26.373060 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:39:26.375723 kernel: BTRFS info (device sda6): using free-space-tree
Jul 10 00:39:26.379728 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:39:26.414855 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:39:26.420360 initrd-setup-root[906]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:39:26.424951 initrd-setup-root[913]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:39:26.429263 initrd-setup-root[920]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:39:26.517888 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 00:39:26.520679 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 00:39:26.522462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 00:39:26.536642 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 00:39:26.539215 kernel: BTRFS info (device sda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:39:26.554334 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 00:39:26.564089 ignition[988]: INFO : Ignition 2.21.0
Jul 10 00:39:26.564089 ignition[988]: INFO : Stage: mount
Jul 10 00:39:26.565317 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:39:26.565317 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 10 00:39:26.565317 ignition[988]: INFO : mount: mount passed
Jul 10 00:39:26.565317 ignition[988]: INFO : Ignition finished successfully
Jul 10 00:39:26.566831 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 00:39:26.568557 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 00:39:26.735974 systemd-networkd[836]: eth0: Gained IPv6LL
Jul 10 00:39:27.349287 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:39:27.375854 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1000)
Jul 10 00:39:27.378975 kernel: BTRFS info (device sda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:39:27.379036 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:39:27.381659 kernel: BTRFS info (device sda6): using free-space-tree
Jul 10 00:39:27.385675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:39:27.415901 ignition[1016]: INFO : Ignition 2.21.0
Jul 10 00:39:27.415901 ignition[1016]: INFO : Stage: files
Jul 10 00:39:27.417090 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:39:27.417090 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 10 00:39:27.418617 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:39:27.419851 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:39:27.420662 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:39:27.423159 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:39:27.424094 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:39:27.425201 unknown[1016]: wrote ssh authorized keys file for user: core
Jul 10 00:39:27.425956 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:39:27.427950 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 10 00:39:27.428945 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 10 00:39:27.712288 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:39:27.862758 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 10 00:39:27.862758 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:39:27.864864 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:39:27.864864 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:39:27.864864 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:39:27.864864 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:39:27.864864 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:39:27.864864 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:39:27.864864 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:39:27.870673 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:39:27.870673 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:39:27.870673 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:39:27.870673 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:39:27.870673 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:39:27.870673 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 10 00:39:28.258922 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 10 00:39:28.653135 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:39:28.653135 ignition[1016]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 10 00:39:28.656080 ignition[1016]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:39:28.657896 ignition[1016]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:39:28.657896 ignition[1016]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 10 00:39:28.657896 ignition[1016]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 10 00:39:28.660311 ignition[1016]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 10 00:39:28.660311 ignition[1016]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 10 00:39:28.660311 ignition[1016]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 10 00:39:28.660311 ignition[1016]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:39:28.660311 ignition[1016]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:39:28.660311 ignition[1016]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:39:28.660311 ignition[1016]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:39:28.660311 ignition[1016]: INFO : files: files passed
Jul 10 00:39:28.660311 ignition[1016]: INFO : Ignition finished successfully
Jul 10 00:39:28.661487 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:39:28.663889 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:39:28.666920 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:39:28.682033 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:39:28.682174 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:39:28.688418 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:39:28.689739 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:39:28.690607 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:39:28.692055 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:39:28.693116 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:39:28.694922 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:39:28.749907 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:39:28.750025 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:39:28.751315 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:39:28.752346 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:39:28.753582 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:39:28.754266 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:39:28.783551 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:39:28.785535 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:39:28.803917 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:39:28.805049 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:39:28.805566 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:39:28.806039 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:39:28.806125 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:39:28.806723 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:39:28.807266 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:39:28.808316 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:39:28.809440 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:39:28.810538 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:39:28.811800 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:39:28.812876 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:39:28.813993 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:39:28.815195 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:39:28.816466 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:39:28.817652 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:39:28.818832 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:39:28.818975 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:39:28.820533 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:39:28.821346 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:39:28.822302 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:39:28.822387 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:39:28.823288 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:39:28.823398 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:39:28.824435 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:39:28.824526 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:39:28.825195 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:39:28.825302 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:39:28.827878 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:39:28.828700 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:39:28.828842 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:39:28.831704 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:39:28.833952 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:39:28.834058 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:39:28.835921 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:39:28.836003 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:39:28.843739 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:39:28.844500 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:39:28.851284 ignition[1071]: INFO : Ignition 2.21.0
Jul 10 00:39:28.852700 ignition[1071]: INFO : Stage: umount
Jul 10 00:39:28.852700 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:39:28.852700 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 10 00:39:28.852700 ignition[1071]: INFO : umount: umount passed
Jul 10 00:39:28.852700 ignition[1071]: INFO : Ignition finished successfully
Jul 10 00:39:28.857617 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:39:28.857758 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:39:28.859057 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:39:28.859106 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:39:28.881035 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:39:28.881094 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:39:28.881971 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 00:39:28.882009 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 00:39:28.882811 systemd[1]: Stopped target network.target - Network.
Jul 10 00:39:28.883647 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:39:28.883688 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:39:28.884581 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:39:28.885419 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:39:28.888859 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:39:28.890083 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:39:28.891138 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:39:28.892017 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:39:28.892055 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:39:28.892901 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:39:28.892932 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:39:28.893749 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:39:28.893791 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:39:28.894648 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:39:28.894686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:39:28.895616 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:39:28.896481 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:39:28.898191 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:39:28.898637 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:39:28.898734 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:39:28.899972 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:39:28.900051 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:39:28.901947 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:39:28.902052 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:39:28.905839 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:39:28.906084 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:39:28.906188 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:39:28.907639 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:39:28.908366 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:39:28.909036 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:39:28.909088 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:39:28.911078 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:39:28.913081 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:39:28.913125 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:39:28.914949 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:39:28.914989 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:39:28.916213 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:39:28.916252 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:39:28.916877 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:39:28.916915 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:39:28.918175 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:39:28.920431 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:39:28.920484 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:39:28.932016 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:39:28.932149 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:39:28.935165 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:39:28.935340 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:39:28.936771 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:39:28.936853 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:39:28.937872 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:39:28.937911 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:39:28.939120 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:39:28.939168 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:39:28.940778 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:39:28.940839 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:39:28.941991 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:39:28.942044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:39:28.944917 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:39:28.945427 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:39:28.945474 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:39:28.946978 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:39:28.947019 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:39:28.949028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:39:28.949091 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:39:28.951789 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 10 00:39:28.951866 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 00:39:28.951922 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:39:28.959059 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:39:28.959169 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:39:28.960781 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:39:28.962327 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:39:28.976481 systemd[1]: Switching root.
Jul 10 00:39:29.007156 systemd-journald[206]: Journal stopped
Jul 10 00:39:30.026172 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:39:30.026196 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:39:30.026207 kernel: SELinux: policy capability open_perms=1
Jul 10 00:39:30.026219 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:39:30.026227 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:39:30.026235 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:39:30.026244 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:39:30.026253 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:39:30.026261 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:39:30.026270 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:39:30.026281 kernel: audit: type=1403 audit(1752107969.142:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:39:30.026290 systemd[1]: Successfully loaded SELinux policy in 47.584ms.
Jul 10 00:39:30.026300 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.713ms.
Jul 10 00:39:30.026311 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:39:30.026321 systemd[1]: Detected virtualization kvm.
Jul 10 00:39:30.026333 systemd[1]: Detected architecture x86-64.
Jul 10 00:39:30.026342 systemd[1]: Detected first boot.
Jul 10 00:39:30.026351 systemd[1]: Initializing machine ID from random generator.
Jul 10 00:39:30.026361 zram_generator::config[1114]: No configuration found.
Jul 10 00:39:30.026371 kernel: Guest personality initialized and is inactive
Jul 10 00:39:30.026380 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 10 00:39:30.026388 kernel: Initialized host personality
Jul 10 00:39:30.026399 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:39:30.026409 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:39:30.026419 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:39:30.026428 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:39:30.026438 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:39:30.026447 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:39:30.026457 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:39:30.026468 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:39:30.026478 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:39:30.026488 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:39:30.026498 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:39:30.026508 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:39:30.026518 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:39:30.026527 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:39:30.026539 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:39:30.026548 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:39:30.026558 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:39:30.026568 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:39:30.026581 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:39:30.026590 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:39:30.026600 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:39:30.026610 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:39:30.026622 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:39:30.026632 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:39:30.026642 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:39:30.026651 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:39:30.026661 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:39:30.026671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:39:30.026681 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:39:30.026691 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:39:30.026703 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:39:30.026712 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:39:30.026722 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:39:30.026732 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:39:30.026742 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:39:30.026754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:39:30.026764 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:39:30.026775 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:39:30.026785 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:39:30.026795 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:39:30.026804 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:39:30.026834 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:39:30.026844 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:39:30.026857 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:39:30.026866 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:39:30.026877 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:39:30.026886 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:39:30.026897 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:39:30.026906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:39:30.026916 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:39:30.026926 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:39:30.026938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:39:30.026948 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:39:30.026957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:39:30.026967 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:39:30.026976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:39:30.026986 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:39:30.026996 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:39:30.027007 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:39:30.027017 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:39:30.027029 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:39:30.027039 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:39:30.027049 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:39:30.027058 kernel: fuse: init (API version 7.41)
Jul 10 00:39:30.027067 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:39:30.027077 kernel: loop: module loaded
Jul 10 00:39:30.027086 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:39:30.027096 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:39:30.027107 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:39:30.027117 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:39:30.027127 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:39:30.027136 systemd[1]: Stopped verity-setup.service.
Jul 10 00:39:30.027146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:39:30.027156 kernel: ACPI: bus type drm_connector registered
Jul 10 00:39:30.027165 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:39:30.027174 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:39:30.027186 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:39:30.027196 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:39:30.027205 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:39:30.027236 systemd-journald[1198]: Collecting audit messages is disabled.
Jul 10 00:39:30.027255 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:39:30.027268 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:39:30.027278 systemd-journald[1198]: Journal started
Jul 10 00:39:30.027297 systemd-journald[1198]: Runtime Journal (/run/log/journal/f308a418601c42178c1abe58f8a21eef) is 8M, max 78.5M, 70.5M free.
Jul 10 00:39:29.701904 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:39:29.707060 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 10 00:39:29.707502 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:39:30.030846 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:39:30.032344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:39:30.033361 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:39:30.033669 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:39:30.034584 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:39:30.034930 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:39:30.035770 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:39:30.036154 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:39:30.037035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:39:30.037321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:39:30.038351 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:39:30.038611 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:39:30.039603 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:39:30.039910 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:39:30.040790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:39:30.041765 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:39:30.042801 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:39:30.044242 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:39:30.059672 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:39:30.062927 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:39:30.066062 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:39:30.066684 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:39:30.066758 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:39:30.068668 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:39:30.076453 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:39:30.079212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:39:30.081914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:39:30.088272 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:39:30.089209 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:39:30.095946 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:39:30.096485 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:39:30.099009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:39:30.102963 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:39:30.108981 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:39:30.113134 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:39:30.114262 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:39:30.130613 systemd-journald[1198]: Time spent on flushing to /var/log/journal/f308a418601c42178c1abe58f8a21eef is 55.104ms for 994 entries.
Jul 10 00:39:30.130613 systemd-journald[1198]: System Journal (/var/log/journal/f308a418601c42178c1abe58f8a21eef) is 8M, max 195.6M, 187.6M free.
Jul 10 00:39:30.202544 systemd-journald[1198]: Received client request to flush runtime journal.
Jul 10 00:39:30.202594 kernel: loop0: detected capacity change from 0 to 8
Jul 10 00:39:30.202616 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:39:30.140312 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:39:30.141581 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:39:30.146008 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 00:39:30.186751 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:39:30.204934 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:39:30.207700 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:39:30.215317 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 00:39:30.220891 kernel: loop1: detected capacity change from 0 to 113872
Jul 10 00:39:30.233861 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:39:30.239948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:39:30.255056 kernel: loop2: detected capacity change from 0 to 146240
Jul 10 00:39:30.291157 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 10 00:39:30.291175 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 10 00:39:30.301601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:39:30.307898 kernel: loop3: detected capacity change from 0 to 229808
Jul 10 00:39:30.338842 kernel: loop4: detected capacity change from 0 to 8
Jul 10 00:39:30.342843 kernel: loop5: detected capacity change from 0 to 113872
Jul 10 00:39:30.360844 kernel: loop6: detected capacity change from 0 to 146240
Jul 10 00:39:30.378836 kernel: loop7: detected capacity change from 0 to 229808
Jul 10 00:39:30.403320 (sd-merge)[1262]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jul 10 00:39:30.404513 (sd-merge)[1262]: Merged extensions into '/usr'.
Jul 10 00:39:30.413127 systemd[1]: Reload requested from client PID 1239 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:39:30.413147 systemd[1]: Reloading...
Jul 10 00:39:30.521873 zram_generator::config[1291]: No configuration found.
Jul 10 00:39:30.620844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:39:30.644838 ldconfig[1234]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 00:39:30.699603 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:39:30.699795 systemd[1]: Reloading finished in 285 ms.
Jul 10 00:39:30.716876 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 00:39:30.718140 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 00:39:30.729005 systemd[1]: Starting ensure-sysext.service...
Jul 10 00:39:30.733758 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:39:30.743232 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 00:39:30.747102 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:39:30.752182 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)...
Jul 10 00:39:30.752260 systemd[1]: Reloading...
Jul 10 00:39:30.763709 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 10 00:39:30.764160 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 10 00:39:30.764437 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 00:39:30.764695 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 00:39:30.765544 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 00:39:30.765782 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Jul 10 00:39:30.766367 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Jul 10 00:39:30.775952 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:39:30.776022 systemd-tmpfiles[1332]: Skipping /boot
Jul 10 00:39:30.788174 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Jul 10 00:39:30.799948 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:39:30.799960 systemd-tmpfiles[1332]: Skipping /boot
Jul 10 00:39:30.849850 zram_generator::config[1356]: No configuration found.
Jul 10 00:39:30.994308 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:39:31.085196 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 10 00:39:31.085758 systemd[1]: Reloading finished in 333 ms.
Jul 10 00:39:31.094396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:39:31.095842 kernel: mousedev: PS/2 mouse device common for all mice
Jul 10 00:39:31.105669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:39:31.113844 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 10 00:39:31.133844 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 10 00:39:31.146158 kernel: ACPI: button: Power Button [PWRF]
Jul 10 00:39:31.150839 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 10 00:39:31.162407 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:39:31.164999 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 00:39:31.169125 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 00:39:31.169793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:39:31.171046 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:39:31.176381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:39:31.180774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:39:31.181991 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:39:31.182085 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:39:31.183344 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 00:39:31.189036 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:39:31.197208 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:39:31.201664 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 00:39:31.202269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:39:31.205660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:39:31.206935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:39:31.213359 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:39:31.213538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:39:31.214990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:39:31.215960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:39:31.216042 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:39:31.216115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:39:31.222935 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:39:31.223193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:39:31.225941 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:39:31.226641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:39:31.226722 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:39:31.226854 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:39:31.235033 systemd[1]: Finished ensure-sysext.service.
Jul 10 00:39:31.244614 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 10 00:39:31.245560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:39:31.245794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:39:31.255190 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 00:39:31.275137 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:39:31.280304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:39:31.281465 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 00:39:31.283032 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 00:39:31.284214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:39:31.284421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:39:31.286182 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:39:31.291084 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:39:31.301414 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:39:31.301495 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:39:31.308511 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 00:39:31.314683 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 10 00:39:31.318218 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 00:39:31.324780 augenrules[1497]: No rules
Jul 10 00:39:31.325134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 00:39:31.326146 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 00:39:31.326476 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:39:31.326904 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 00:39:31.345209 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 00:39:31.352223 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 00:39:31.363962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:39:31.383886 kernel: EDAC MC: Ver: 3.0.0
Jul 10 00:39:31.434484 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 00:39:31.531247 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:39:31.589635 systemd-networkd[1458]: lo: Link UP
Jul 10 00:39:31.589853 systemd-networkd[1458]: lo: Gained carrier
Jul 10 00:39:31.590059 systemd-resolved[1460]: Positive Trust Anchors:
Jul 10 00:39:31.590067 systemd-resolved[1460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:39:31.590093 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:39:31.593153 systemd-networkd[1458]: Enumeration completed
Jul 10 00:39:31.593224 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:39:31.593438 systemd-resolved[1460]: Defaulting to hostname 'linux'.
Jul 10 00:39:31.594221 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:39:31.595869 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:39:31.596538 systemd-networkd[1458]: eth0: Link UP
Jul 10 00:39:31.596665 systemd-networkd[1458]: eth0: Gained carrier
Jul 10 00:39:31.596687 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:39:31.596975 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 10 00:39:31.599021 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 00:39:31.599647 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:39:31.600521 systemd[1]: Reached target network.target - Network.
Jul 10 00:39:31.601215 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:39:31.607012 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 10 00:39:31.607612 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:39:31.608256 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 00:39:31.608929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 00:39:31.609491 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 10 00:39:31.610045 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 00:39:31.610593 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 00:39:31.610620 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:39:31.611247 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 00:39:31.611955 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 00:39:31.614351 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 00:39:31.614895 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:39:31.617429 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 00:39:31.619488 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 00:39:31.622441 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 10 00:39:31.623164 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 10 00:39:31.623730 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 10 00:39:31.626459 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 00:39:31.627378 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 10 00:39:31.628522 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 00:39:31.631010 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:39:31.631516 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:39:31.632025 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:39:31.632104 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:39:31.633940 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 10 00:39:31.636692 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 10 00:39:31.638845 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 10 00:39:31.641915 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 00:39:31.645045 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 10 00:39:31.653289 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 10 00:39:31.654210 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 00:39:31.656173 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 10 00:39:31.662865 jq[1531]: false
Jul 10 00:39:31.664054 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 10 00:39:31.667268 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 10 00:39:31.670448 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 10 00:39:31.681437 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 10 00:39:31.695031 extend-filesystems[1532]: Found /dev/sda6
Jul 10 00:39:31.704129 oslogin_cache_refresh[1533]: Refreshing passwd entry cache
Jul 10 00:39:31.712100 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing passwd entry cache
Jul 10 00:39:31.712100 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting users, quitting
Jul 10 00:39:31.712100 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 00:39:31.712100 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing group entry cache
Jul 10 00:39:31.712100 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting groups, quitting
Jul 10 00:39:31.712100 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 00:39:31.707104 oslogin_cache_refresh[1533]: Failure getting users, quitting
Jul 10 00:39:31.707117 oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 00:39:31.707154 oslogin_cache_refresh[1533]: Refreshing group entry cache
Jul 10 00:39:31.707601 oslogin_cache_refresh[1533]: Failure getting groups, quitting
Jul 10 00:39:31.707609 oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 00:39:31.714632 extend-filesystems[1532]: Found /dev/sda9
Jul 10 00:39:31.720893 extend-filesystems[1532]: Checking size of /dev/sda9
Jul 10 00:39:31.717762 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 10 00:39:31.720590 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 00:39:31.721604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 10 00:39:31.723958 systemd[1]: Starting update-engine.service - Update Engine...
Jul 10 00:39:31.727602 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 10 00:39:31.730870 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 10 00:39:31.737542 coreos-metadata[1528]: Jul 10 00:39:31.737 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 10 00:39:31.742377 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 00:39:31.744682 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 00:39:31.746045 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 10 00:39:31.746350 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 10 00:39:31.746645 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 10 00:39:31.747516 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 00:39:31.747737 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 10 00:39:31.750135 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 00:39:31.751125 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 10 00:39:31.753298 jq[1556]: true
Jul 10 00:39:31.770006 extend-filesystems[1532]: Resized partition /dev/sda9
Jul 10 00:39:31.775856 extend-filesystems[1575]: resize2fs 1.47.2 (1-Jan-2025)
Jul 10 00:39:31.785841 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Jul 10 00:39:31.794488 update_engine[1554]: I20250710 00:39:31.794239 1554 main.cc:92] Flatcar Update Engine starting
Jul 10 00:39:31.800705 jq[1566]: true
Jul 10 00:39:31.808790 tar[1562]: linux-amd64/LICENSE
Jul 10 00:39:31.811415 tar[1562]: linux-amd64/helm
Jul 10 00:39:31.827655 dbus-daemon[1529]: [system] SELinux support is enabled
Jul 10 00:39:31.827825 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 00:39:31.828796 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 10 00:39:31.830656 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 00:39:31.830679 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 00:39:31.831883 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 00:39:31.831897 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 00:39:31.853994 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 00:39:31.861808 update_engine[1554]: I20250710 00:39:31.859117 1554 update_check_scheduler.cc:74] Next update check in 3m3s
Jul 10 00:39:31.876995 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 00:39:31.951407 systemd-logind[1548]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 10 00:39:31.951432 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 10 00:39:31.953715 systemd-logind[1548]: New seat seat0.
Jul 10 00:39:31.958413 bash[1596]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 00:39:31.958209 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 10 00:39:31.967272 systemd[1]: Starting sshkeys.service...
Jul 10 00:39:31.968014 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 10 00:39:32.024947 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 10 00:39:32.030284 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 10 00:39:32.055866 systemd-networkd[1458]: eth0: DHCPv4 address 172.238.161.214/24, gateway 172.238.161.1 acquired from 23.40.197.117
Jul 10 00:39:32.057591 systemd-timesyncd[1480]: Network configuration changed, trying to establish connection.
Jul 10 00:39:32.065501 dbus-daemon[1529]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1458 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 10 00:39:32.070786 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 10 00:39:32.084203 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 10 00:39:32.131225 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Jul 10 00:39:32.144699 extend-filesystems[1575]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 10 00:39:32.144699 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 10
Jul 10 00:39:32.144699 extend-filesystems[1575]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Jul 10 00:39:32.150440 extend-filesystems[1532]: Resized filesystem in /dev/sda9
Jul 10 00:39:32.147496 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 00:39:32.147751 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 10 00:39:32.156519 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 10 00:39:32.160244 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 10 00:39:32.174263 coreos-metadata[1605]: Jul 10 00:39:32.173 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 10 00:39:32.174653 locksmithd[1583]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 00:39:33.356295 systemd-timesyncd[1480]: Contacted time server 23.150.41.123:123 (0.flatcar.pool.ntp.org).
Jul 10 00:39:33.356341 systemd-timesyncd[1480]: Initial clock synchronization to Thu 2025-07-10 00:39:33.355948 UTC.
Jul 10 00:39:33.357338 systemd-resolved[1460]: Clock change detected. Flushing caches.
Jul 10 00:39:33.363523 systemd[1]: issuegen.service: Deactivated successfully.
Jul 10 00:39:33.364387 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 10 00:39:33.369038 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 10 00:39:33.378286 containerd[1580]: time="2025-07-10T00:39:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 10 00:39:33.380060 containerd[1580]: time="2025-07-10T00:39:33.380035532Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 10 00:39:33.382359 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 10 00:39:33.383007 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 10 00:39:33.386154 dbus-daemon[1529]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1610 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 10 00:39:33.392268 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 10 00:39:33.397563 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 10 00:39:33.400409 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 10 00:39:33.400532 containerd[1580]: time="2025-07-10T00:39:33.400510702Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.35µs"
Jul 10 00:39:33.400582 containerd[1580]: time="2025-07-10T00:39:33.400568042Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 10 00:39:33.400655 containerd[1580]: time="2025-07-10T00:39:33.400617433Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 10 00:39:33.400858 containerd[1580]: time="2025-07-10T00:39:33.400841473Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 10 00:39:33.400910 containerd[1580]: time="2025-07-10T00:39:33.400898653Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 10 00:39:33.400964 containerd[1580]: time="2025-07-10T00:39:33.400953063Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 00:39:33.401066 containerd[1580]: time="2025-07-10T00:39:33.401049733Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 00:39:33.401110 containerd[1580]: time="2025-07-10T00:39:33.401099973Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 00:39:33.401384 containerd[1580]: time="2025-07-10T00:39:33.401364603Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 00:39:33.401433 containerd[1580]: time="2025-07-10T00:39:33.401422033Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 00:39:33.401477 containerd[1580]: time="2025-07-10T00:39:33.401465553Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 00:39:33.401514 containerd[1580]: time="2025-07-10T00:39:33.401504513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 10 00:39:33.401684 containerd[1580]: time="2025-07-10T00:39:33.401667654Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 10 00:39:33.401954 containerd[1580]: time="2025-07-10T00:39:33.401937254Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 00:39:33.402023 containerd[1580]: time="2025-07-10T00:39:33.402009824Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 00:39:33.402062 containerd[1580]: time="2025-07-10T00:39:33.402052604Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 10 00:39:33.402131 containerd[1580]: time="2025-07-10T00:39:33.402119154Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 10 00:39:33.402490 containerd[1580]: time="2025-07-10T00:39:33.402474434Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 10 00:39:33.402595 containerd[1580]: time="2025-07-10T00:39:33.402581204Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 00:39:33.403875 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 10 00:39:33.405042 systemd[1]: Reached target getty.target - Login Prompts.
Jul 10 00:39:33.409166 containerd[1580]: time="2025-07-10T00:39:33.409127621Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 10 00:39:33.409205 containerd[1580]: time="2025-07-10T00:39:33.409195221Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 10 00:39:33.409226 containerd[1580]: time="2025-07-10T00:39:33.409215651Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 10 00:39:33.409244 containerd[1580]: time="2025-07-10T00:39:33.409227851Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 10 00:39:33.409305 containerd[1580]: time="2025-07-10T00:39:33.409282261Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 10 00:39:33.409327 containerd[1580]: time="2025-07-10T00:39:33.409303421Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 10 00:39:33.409345 containerd[1580]: time="2025-07-10T00:39:33.409328661Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 10 00:39:33.409345 containerd[1580]: time="2025-07-10T00:39:33.409340951Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 10 00:39:33.409377 containerd[1580]: time="2025-07-10T00:39:33.409355201Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 10 00:39:33.409377 containerd[1580]: time="2025-07-10T00:39:33.409364441Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 10 00:39:33.409377 containerd[1580]: time="2025-07-10T00:39:33.409372891Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 10 00:39:33.409429 containerd[1580]: time="2025-07-10T00:39:33.409384801Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409494591Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409517891Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409533071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409542301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409551681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409561721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409573871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409588951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409598812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 10 00:39:33.409774 containerd[1580]: time="2025-07-10T00:39:33.409608222Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 10 00:39:33.410668 containerd[1580]: time="2025-07-10T00:39:33.409618382Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 10 00:39:33.410766 containerd[1580]: time="2025-07-10T00:39:33.410741373Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 10 00:39:33.410766 containerd[1580]: time="2025-07-10T00:39:33.410763673Z" level=info msg="Start snapshots syncer"
Jul 10 00:39:33.411073 containerd[1580]: time="2025-07-10T00:39:33.411045403Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 10 00:39:33.411481 containerd[1580]: time="2025-07-10T00:39:33.411440303Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 10 00:39:33.411587 containerd[1580]: time="2025-07-10T00:39:33.411494523Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 10 00:39:33.414296 containerd[1580]: time="2025-07-10T00:39:33.414207436Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 10 00:39:33.414364 containerd[1580]: time="2025-07-10T00:39:33.414335266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 10 00:39:33.414388 containerd[1580]: time="2025-07-10T00:39:33.414365046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 10 00:39:33.414388 containerd[1580]: time="2025-07-10T00:39:33.414376706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 10 00:39:33.414421 containerd[1580]: time="2025-07-10T00:39:33.414387216Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 10 00:39:33.414421 containerd[1580]: time="2025-07-10T00:39:33.414405846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 10 00:39:33.414421 containerd[1580]: time="2025-07-10T00:39:33.414415266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 10 00:39:33.414478 containerd[1580]: time="2025-07-10T00:39:33.414426076Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 10 00:39:33.414478 containerd[1580]: time="2025-07-10T00:39:33.414446136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 10 00:39:33.414478 containerd[1580]: time="2025-07-10T00:39:33.414455396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 10 00:39:33.414478 containerd[1580]: time="2025-07-10T00:39:33.414470856Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416378958Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416407188Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416461178Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416474468Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416482268Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416495958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416506388Z" level=info msg="loading plugin"
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416523288Z" level=info msg="runtime interface created" Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416528568Z" level=info msg="created NRI interface" Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416536708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416548078Z" level=info msg="Connect containerd service" Jul 10 00:39:33.416570 containerd[1580]: time="2025-07-10T00:39:33.416572408Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:39:33.419144 containerd[1580]: time="2025-07-10T00:39:33.418888201Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:39:33.452187 coreos-metadata[1605]: Jul 10 00:39:33.451 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jul 10 00:39:33.492095 polkitd[1632]: Started polkitd version 126 Jul 10 00:39:33.500657 polkitd[1632]: Loading rules from directory /etc/polkit-1/rules.d Jul 10 00:39:33.501554 polkitd[1632]: Loading rules from directory /run/polkit-1/rules.d Jul 10 00:39:33.501652 polkitd[1632]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 10 00:39:33.501913 polkitd[1632]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 10 00:39:33.501935 polkitd[1632]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 10 00:39:33.501968 polkitd[1632]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 10 
00:39:33.503177 polkitd[1632]: Finished loading, compiling and executing 2 rules Jul 10 00:39:33.503788 systemd[1]: Started polkit.service - Authorization Manager. Jul 10 00:39:33.504335 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 10 00:39:33.506334 polkitd[1632]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 10 00:39:33.527456 systemd-resolved[1460]: System hostname changed to '172-238-161-214'. Jul 10 00:39:33.527785 systemd-hostnamed[1610]: Hostname set to <172-238-161-214> (transient) Jul 10 00:39:33.539196 containerd[1580]: time="2025-07-10T00:39:33.539157251Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:39:33.539240 containerd[1580]: time="2025-07-10T00:39:33.539228101Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:39:33.539261 containerd[1580]: time="2025-07-10T00:39:33.539240361Z" level=info msg="Start subscribing containerd event" Jul 10 00:39:33.539280 containerd[1580]: time="2025-07-10T00:39:33.539265461Z" level=info msg="Start recovering state" Jul 10 00:39:33.539423 containerd[1580]: time="2025-07-10T00:39:33.539330571Z" level=info msg="Start event monitor" Jul 10 00:39:33.539423 containerd[1580]: time="2025-07-10T00:39:33.539347181Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:39:33.539423 containerd[1580]: time="2025-07-10T00:39:33.539355091Z" level=info msg="Start streaming server" Jul 10 00:39:33.539423 containerd[1580]: time="2025-07-10T00:39:33.539368181Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:39:33.539423 containerd[1580]: time="2025-07-10T00:39:33.539375321Z" level=info msg="runtime interface starting up..." Jul 10 00:39:33.539423 containerd[1580]: time="2025-07-10T00:39:33.539380751Z" level=info msg="starting plugins..." 
Jul 10 00:39:33.539423 containerd[1580]: time="2025-07-10T00:39:33.539394751Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:39:33.540461 containerd[1580]: time="2025-07-10T00:39:33.539529731Z" level=info msg="containerd successfully booted in 0.161629s" Jul 10 00:39:33.539567 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:39:33.585134 coreos-metadata[1605]: Jul 10 00:39:33.584 INFO Fetch successful Jul 10 00:39:33.604559 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:39:33.605585 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 10 00:39:33.608272 systemd[1]: Finished sshkeys.service. Jul 10 00:39:33.765878 tar[1562]: linux-amd64/README.md Jul 10 00:39:33.785041 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:39:33.916230 coreos-metadata[1528]: Jul 10 00:39:33.916 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jul 10 00:39:34.007730 coreos-metadata[1528]: Jul 10 00:39:34.007 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jul 10 00:39:34.196223 coreos-metadata[1528]: Jul 10 00:39:34.196 INFO Fetch successful Jul 10 00:39:34.196489 coreos-metadata[1528]: Jul 10 00:39:34.196 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jul 10 00:39:34.448833 coreos-metadata[1528]: Jul 10 00:39:34.448 INFO Fetch successful Jul 10 00:39:34.555033 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 00:39:34.556060 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:39:34.689875 systemd-networkd[1458]: eth0: Gained IPv6LL Jul 10 00:39:34.691854 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:39:34.693897 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 10 00:39:34.696300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:39:34.698776 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:39:34.722676 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:39:35.527260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:39:35.528340 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:39:35.529405 systemd[1]: Startup finished in 2.527s (kernel) + 6.501s (initrd) + 5.262s (userspace) = 14.291s. Jul 10 00:39:35.569952 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:39:36.046512 kubelet[1702]: E0710 00:39:36.046434 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:39:36.049934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:39:36.050118 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:39:36.050663 systemd[1]: kubelet.service: Consumed 828ms CPU time, 267.9M memory peak. Jul 10 00:39:37.146136 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:39:37.147267 systemd[1]: Started sshd@0-172.238.161.214:22-139.178.89.65:60834.service - OpenSSH per-connection server daemon (139.178.89.65:60834). 
Jul 10 00:39:37.489747 sshd[1714]: Accepted publickey for core from 139.178.89.65 port 60834 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:39:37.491929 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:39:37.499276 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:39:37.500560 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:39:37.507733 systemd-logind[1548]: New session 1 of user core. Jul 10 00:39:37.519693 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:39:37.522718 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:39:37.537694 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:39:37.540307 systemd-logind[1548]: New session c1 of user core. Jul 10 00:39:37.671616 systemd[1718]: Queued start job for default target default.target. Jul 10 00:39:37.678723 systemd[1718]: Created slice app.slice - User Application Slice. Jul 10 00:39:37.678752 systemd[1718]: Reached target paths.target - Paths. Jul 10 00:39:37.678799 systemd[1718]: Reached target timers.target - Timers. Jul 10 00:39:37.680126 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:39:37.689438 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:39:37.689486 systemd[1718]: Reached target sockets.target - Sockets. Jul 10 00:39:37.689521 systemd[1718]: Reached target basic.target - Basic System. Jul 10 00:39:37.689559 systemd[1718]: Reached target default.target - Main User Target. Jul 10 00:39:37.689587 systemd[1718]: Startup finished in 142ms. Jul 10 00:39:37.689916 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:39:37.696727 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 10 00:39:37.948432 systemd[1]: Started sshd@1-172.238.161.214:22-139.178.89.65:60836.service - OpenSSH per-connection server daemon (139.178.89.65:60836). Jul 10 00:39:38.274833 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 60836 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:39:38.276330 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:39:38.281524 systemd-logind[1548]: New session 2 of user core. Jul 10 00:39:38.294756 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:39:38.519795 sshd[1731]: Connection closed by 139.178.89.65 port 60836 Jul 10 00:39:38.520195 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 10 00:39:38.523915 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:39:38.524714 systemd[1]: sshd@1-172.238.161.214:22-139.178.89.65:60836.service: Deactivated successfully. Jul 10 00:39:38.527090 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:39:38.529110 systemd-logind[1548]: Removed session 2. Jul 10 00:39:38.578639 systemd[1]: Started sshd@2-172.238.161.214:22-139.178.89.65:60852.service - OpenSSH per-connection server daemon (139.178.89.65:60852). Jul 10 00:39:38.909747 sshd[1737]: Accepted publickey for core from 139.178.89.65 port 60852 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:39:38.911104 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:39:38.915434 systemd-logind[1548]: New session 3 of user core. Jul 10 00:39:38.922723 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 10 00:39:39.149272 sshd[1739]: Connection closed by 139.178.89.65 port 60852 Jul 10 00:39:39.149755 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jul 10 00:39:39.153353 systemd[1]: sshd@2-172.238.161.214:22-139.178.89.65:60852.service: Deactivated successfully. Jul 10 00:39:39.155087 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:39:39.156458 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:39:39.157734 systemd-logind[1548]: Removed session 3. Jul 10 00:39:39.213779 systemd[1]: Started sshd@3-172.238.161.214:22-139.178.89.65:60868.service - OpenSSH per-connection server daemon (139.178.89.65:60868). Jul 10 00:39:39.551258 sshd[1745]: Accepted publickey for core from 139.178.89.65 port 60868 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:39:39.552481 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:39:39.556685 systemd-logind[1548]: New session 4 of user core. Jul 10 00:39:39.561771 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:39:39.796446 sshd[1747]: Connection closed by 139.178.89.65 port 60868 Jul 10 00:39:39.796980 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Jul 10 00:39:39.800378 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:39:39.801119 systemd[1]: sshd@3-172.238.161.214:22-139.178.89.65:60868.service: Deactivated successfully. Jul 10 00:39:39.803073 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:39:39.804609 systemd-logind[1548]: Removed session 4. Jul 10 00:39:39.859505 systemd[1]: Started sshd@4-172.238.161.214:22-139.178.89.65:59502.service - OpenSSH per-connection server daemon (139.178.89.65:59502). 
Jul 10 00:39:40.201320 sshd[1753]: Accepted publickey for core from 139.178.89.65 port 59502 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:39:40.202190 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:39:40.209356 systemd-logind[1548]: New session 5 of user core. Jul 10 00:39:40.213791 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:39:40.407884 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:39:40.408197 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:39:40.424469 sudo[1756]: pam_unix(sudo:session): session closed for user root Jul 10 00:39:40.476655 sshd[1755]: Connection closed by 139.178.89.65 port 59502 Jul 10 00:39:40.477314 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jul 10 00:39:40.480814 systemd[1]: sshd@4-172.238.161.214:22-139.178.89.65:59502.service: Deactivated successfully. Jul 10 00:39:40.482613 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:39:40.485312 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:39:40.486318 systemd-logind[1548]: Removed session 5. Jul 10 00:39:40.538177 systemd[1]: Started sshd@5-172.238.161.214:22-139.178.89.65:59516.service - OpenSSH per-connection server daemon (139.178.89.65:59516). Jul 10 00:39:40.883466 sshd[1762]: Accepted publickey for core from 139.178.89.65 port 59516 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:39:40.885025 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:39:40.889486 systemd-logind[1548]: New session 6 of user core. Jul 10 00:39:40.893833 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 10 00:39:41.084745 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:39:41.085030 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:39:41.089654 sudo[1766]: pam_unix(sudo:session): session closed for user root Jul 10 00:39:41.095029 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:39:41.095312 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:39:41.104429 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:39:41.149113 augenrules[1788]: No rules Jul 10 00:39:41.150980 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:39:41.151237 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:39:41.152496 sudo[1765]: pam_unix(sudo:session): session closed for user root Jul 10 00:39:41.204374 sshd[1764]: Connection closed by 139.178.89.65 port 59516 Jul 10 00:39:41.204927 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jul 10 00:39:41.208913 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:39:41.209558 systemd[1]: sshd@5-172.238.161.214:22-139.178.89.65:59516.service: Deactivated successfully. Jul 10 00:39:41.211571 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:39:41.213659 systemd-logind[1548]: Removed session 6. Jul 10 00:39:41.267935 systemd[1]: Started sshd@6-172.238.161.214:22-139.178.89.65:59524.service - OpenSSH per-connection server daemon (139.178.89.65:59524). 
Jul 10 00:39:41.619911 sshd[1797]: Accepted publickey for core from 139.178.89.65 port 59524 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:39:41.621355 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:39:41.626591 systemd-logind[1548]: New session 7 of user core. Jul 10 00:39:41.632749 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:39:41.823004 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:39:41.823313 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:39:42.080706 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:39:42.094914 (dockerd)[1817]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:39:42.275963 dockerd[1817]: time="2025-07-10T00:39:42.275596656Z" level=info msg="Starting up" Jul 10 00:39:42.278430 dockerd[1817]: time="2025-07-10T00:39:42.278384438Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:39:42.305924 systemd[1]: var-lib-docker-metacopy\x2dcheck243579334-merged.mount: Deactivated successfully. Jul 10 00:39:42.327703 dockerd[1817]: time="2025-07-10T00:39:42.327492467Z" level=info msg="Loading containers: start." Jul 10 00:39:42.336643 kernel: Initializing XFRM netlink socket Jul 10 00:39:42.550357 systemd-networkd[1458]: docker0: Link UP Jul 10 00:39:42.554061 dockerd[1817]: time="2025-07-10T00:39:42.554020134Z" level=info msg="Loading containers: done." Jul 10 00:39:42.566660 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3953629379-merged.mount: Deactivated successfully. 
Jul 10 00:39:42.568048 dockerd[1817]: time="2025-07-10T00:39:42.568023458Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:39:42.568110 dockerd[1817]: time="2025-07-10T00:39:42.568074538Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:39:42.568182 dockerd[1817]: time="2025-07-10T00:39:42.568167038Z" level=info msg="Initializing buildkit" Jul 10 00:39:42.588538 dockerd[1817]: time="2025-07-10T00:39:42.588511558Z" level=info msg="Completed buildkit initialization" Jul 10 00:39:42.595131 dockerd[1817]: time="2025-07-10T00:39:42.595098335Z" level=info msg="Daemon has completed initialization" Jul 10 00:39:42.595248 dockerd[1817]: time="2025-07-10T00:39:42.595204605Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:39:42.595412 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:39:43.079562 containerd[1580]: time="2025-07-10T00:39:43.079525109Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 00:39:43.953591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201505968.mount: Deactivated successfully. 
Jul 10 00:39:45.141877 containerd[1580]: time="2025-07-10T00:39:45.141803901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:45.142989 containerd[1580]: time="2025-07-10T00:39:45.142955742Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 10 00:39:45.143775 containerd[1580]: time="2025-07-10T00:39:45.143726213Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:45.146153 containerd[1580]: time="2025-07-10T00:39:45.146118305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:45.147240 containerd[1580]: time="2025-07-10T00:39:45.147010886Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.067451227s" Jul 10 00:39:45.147240 containerd[1580]: time="2025-07-10T00:39:45.147042756Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 10 00:39:45.147785 containerd[1580]: time="2025-07-10T00:39:45.147762667Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 00:39:46.300848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 10 00:39:46.302595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:39:46.480749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:39:46.487470 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:39:46.523005 kubelet[2085]: E0710 00:39:46.522954 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:39:46.528369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:39:46.528555 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:39:46.528959 systemd[1]: kubelet.service: Consumed 180ms CPU time, 107.6M memory peak. 
Jul 10 00:39:46.757844 containerd[1580]: time="2025-07-10T00:39:46.757479356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:46.758518 containerd[1580]: time="2025-07-10T00:39:46.758486457Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 10 00:39:46.760644 containerd[1580]: time="2025-07-10T00:39:46.759155058Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:46.761852 containerd[1580]: time="2025-07-10T00:39:46.761821501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:46.762606 containerd[1580]: time="2025-07-10T00:39:46.762559331Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.614770144s" Jul 10 00:39:46.762678 containerd[1580]: time="2025-07-10T00:39:46.762607291Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 10 00:39:46.763296 containerd[1580]: time="2025-07-10T00:39:46.763269592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 00:39:48.193712 containerd[1580]: time="2025-07-10T00:39:48.193606772Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:48.194575 containerd[1580]: time="2025-07-10T00:39:48.194483873Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 10 00:39:48.195181 containerd[1580]: time="2025-07-10T00:39:48.195151824Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:48.197715 containerd[1580]: time="2025-07-10T00:39:48.197673946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:48.198789 containerd[1580]: time="2025-07-10T00:39:48.198687757Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.435389305s" Jul 10 00:39:48.198789 containerd[1580]: time="2025-07-10T00:39:48.198714537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 10 00:39:48.199559 containerd[1580]: time="2025-07-10T00:39:48.199507128Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 00:39:49.321665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3809979392.mount: Deactivated successfully. 
Jul 10 00:39:49.646131 containerd[1580]: time="2025-07-10T00:39:49.645910464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:49.646915 containerd[1580]: time="2025-07-10T00:39:49.646524665Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 10 00:39:49.647266 containerd[1580]: time="2025-07-10T00:39:49.647236765Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:49.648367 containerd[1580]: time="2025-07-10T00:39:49.648347357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:39:49.648918 containerd[1580]: time="2025-07-10T00:39:49.648874387Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.449214129s" Jul 10 00:39:49.648918 containerd[1580]: time="2025-07-10T00:39:49.648910727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 10 00:39:49.649382 containerd[1580]: time="2025-07-10T00:39:49.649343368Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 00:39:50.414355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121010582.mount: Deactivated successfully. 
Jul 10 00:39:51.146698 containerd[1580]: time="2025-07-10T00:39:51.146613575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:39:51.147597 containerd[1580]: time="2025-07-10T00:39:51.147517395Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jul 10 00:39:51.148123 containerd[1580]: time="2025-07-10T00:39:51.148093016Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:39:51.151210 containerd[1580]: time="2025-07-10T00:39:51.150162698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:39:51.151210 containerd[1580]: time="2025-07-10T00:39:51.151013889Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.501641771s"
Jul 10 00:39:51.151210 containerd[1580]: time="2025-07-10T00:39:51.151050339Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 10 00:39:51.151763 containerd[1580]: time="2025-07-10T00:39:51.151732180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 00:39:51.807980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1815529372.mount: Deactivated successfully.
Jul 10 00:39:51.812411 containerd[1580]: time="2025-07-10T00:39:51.812359860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:39:51.813098 containerd[1580]: time="2025-07-10T00:39:51.812890671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 10 00:39:51.813614 containerd[1580]: time="2025-07-10T00:39:51.813585901Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:39:51.815079 containerd[1580]: time="2025-07-10T00:39:51.815052273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:39:51.815604 containerd[1580]: time="2025-07-10T00:39:51.815572623Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 663.816273ms"
Jul 10 00:39:51.815604 containerd[1580]: time="2025-07-10T00:39:51.815601443Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 00:39:51.816147 containerd[1580]: time="2025-07-10T00:39:51.816121074Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 10 00:39:52.532864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744995325.mount: Deactivated successfully.
Jul 10 00:39:54.104151 containerd[1580]: time="2025-07-10T00:39:54.103474241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:39:54.105759 containerd[1580]: time="2025-07-10T00:39:54.104518172Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175"
Jul 10 00:39:54.105759 containerd[1580]: time="2025-07-10T00:39:54.105006362Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:39:54.110229 containerd[1580]: time="2025-07-10T00:39:54.110161047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:39:54.112085 containerd[1580]: time="2025-07-10T00:39:54.111950799Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.295801445s"
Jul 10 00:39:54.112085 containerd[1580]: time="2025-07-10T00:39:54.111978219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 10 00:39:56.268111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:39:56.268383 systemd[1]: kubelet.service: Consumed 180ms CPU time, 107.6M memory peak.
Jul 10 00:39:56.270486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:39:56.291891 systemd[1]: Reload requested from client PID 2245 ('systemctl') (unit session-7.scope)...
Jul 10 00:39:56.291907 systemd[1]: Reloading...
Jul 10 00:39:56.417654 zram_generator::config[2287]: No configuration found.
Jul 10 00:39:56.515383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:39:56.604697 systemd[1]: Reloading finished in 312 ms.
Jul 10 00:39:56.654078 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 00:39:56.654191 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 00:39:56.654461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:39:56.654501 systemd[1]: kubelet.service: Consumed 123ms CPU time, 98.3M memory peak.
Jul 10 00:39:56.655967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:39:56.808615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:39:56.816908 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:39:56.851410 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:39:56.851410 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:39:56.851410 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:39:56.851684 kubelet[2343]: I0710 00:39:56.851439 2343 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:39:57.641658 kubelet[2343]: I0710 00:39:57.639881 2343 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 10 00:39:57.641658 kubelet[2343]: I0710 00:39:57.639909 2343 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:39:57.641658 kubelet[2343]: I0710 00:39:57.640241 2343 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 10 00:39:57.668966 kubelet[2343]: I0710 00:39:57.668951 2343 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:39:57.669344 kubelet[2343]: E0710 00:39:57.669285 2343 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.161.214:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.161.214:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 10 00:39:57.677476 kubelet[2343]: I0710 00:39:57.677436 2343 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 00:39:57.682338 kubelet[2343]: I0710 00:39:57.682309 2343 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:39:57.682604 kubelet[2343]: I0710 00:39:57.682579 2343 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:39:57.682791 kubelet[2343]: I0710 00:39:57.682603 2343 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-161-214","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:39:57.682791 kubelet[2343]: I0710 00:39:57.682789 2343 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:39:57.682916 kubelet[2343]: I0710 00:39:57.682799 2343 container_manager_linux.go:303] "Creating device plugin manager"
Jul 10 00:39:57.683676 kubelet[2343]: I0710 00:39:57.683651 2343 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:39:57.686663 kubelet[2343]: I0710 00:39:57.686443 2343 kubelet.go:480] "Attempting to sync node with API server"
Jul 10 00:39:57.686663 kubelet[2343]: I0710 00:39:57.686465 2343 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:39:57.688199 kubelet[2343]: I0710 00:39:57.687789 2343 kubelet.go:386] "Adding apiserver pod source"
Jul 10 00:39:57.688199 kubelet[2343]: I0710 00:39:57.687810 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:39:57.692392 kubelet[2343]: E0710 00:39:57.692371 2343 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.161.214:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-161-214&limit=500&resourceVersion=0\": dial tcp 172.238.161.214:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 10 00:39:57.694807 kubelet[2343]: E0710 00:39:57.694764 2343 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.161.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.161.214:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 10 00:39:57.694935 kubelet[2343]: I0710 00:39:57.694921 2343 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 10 00:39:57.695368 kubelet[2343]: I0710 00:39:57.695349 2343 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 10 00:39:57.696515 kubelet[2343]: W0710 00:39:57.696467 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 00:39:57.702691 kubelet[2343]: I0710 00:39:57.701276 2343 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:39:57.702691 kubelet[2343]: I0710 00:39:57.701327 2343 server.go:1289] "Started kubelet"
Jul 10 00:39:57.704473 kubelet[2343]: I0710 00:39:57.704424 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:39:57.706791 kubelet[2343]: I0710 00:39:57.706764 2343 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:39:57.706954 kubelet[2343]: E0710 00:39:57.706923 2343 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-161-214\" not found"
Jul 10 00:39:57.712472 kubelet[2343]: I0710 00:39:57.712452 2343 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:39:57.712613 kubelet[2343]: I0710 00:39:57.712535 2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:39:57.713648 kubelet[2343]: I0710 00:39:57.706530 2343 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:39:57.714176 kubelet[2343]: I0710 00:39:57.714148 2343 server.go:317] "Adding debug handlers to kubelet server"
Jul 10 00:39:57.714893 kubelet[2343]: I0710 00:39:57.714850 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:39:57.715073 kubelet[2343]: I0710 00:39:57.715044 2343 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:39:57.715380 kubelet[2343]: E0710 00:39:57.715340 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.161.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-161-214?timeout=10s\": dial tcp 172.238.161.214:6443: connect: connection refused" interval="200ms"
Jul 10 00:39:57.716720 kubelet[2343]: I0710 00:39:57.716702 2343 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:39:57.717684 kubelet[2343]: E0710 00:39:57.715947 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.161.214:6443/api/v1/namespaces/default/events\": dial tcp 172.238.161.214:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-161-214.1850bcf0db1b5226 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-161-214,UID:172-238-161-214,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-161-214,},FirstTimestamp:2025-07-10 00:39:57.701296678 +0000 UTC m=+0.880557702,LastTimestamp:2025-07-10 00:39:57.701296678 +0000 UTC m=+0.880557702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-161-214,}"
Jul 10 00:39:57.717684 kubelet[2343]: I0710 00:39:57.717107 2343 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:39:57.717684 kubelet[2343]: I0710 00:39:57.717157 2343 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:39:57.718364 kubelet[2343]: E0710 00:39:57.718344 2343 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.161.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.161.214:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 10 00:39:57.718531 kubelet[2343]: I0710 00:39:57.718517 2343 factory.go:223] Registration of the containerd container factory successfully
Jul 10 00:39:57.718602 kubelet[2343]: I0710 00:39:57.718593 2343 factory.go:223] Registration of the systemd container factory successfully
Jul 10 00:39:57.730486 kubelet[2343]: I0710 00:39:57.730455 2343 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:39:57.730486 kubelet[2343]: I0710 00:39:57.730480 2343 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 10 00:39:57.730556 kubelet[2343]: I0710 00:39:57.730493 2343 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:39:57.730556 kubelet[2343]: I0710 00:39:57.730500 2343 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 10 00:39:57.730556 kubelet[2343]: E0710 00:39:57.730534 2343 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:39:57.734758 kubelet[2343]: E0710 00:39:57.734721 2343 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.161.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.161.214:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 10 00:39:57.735250 kubelet[2343]: E0710 00:39:57.735229 2343 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:39:57.745311 kubelet[2343]: I0710 00:39:57.745296 2343 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:39:57.745311 kubelet[2343]: I0710 00:39:57.745307 2343 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:39:57.745368 kubelet[2343]: I0710 00:39:57.745329 2343 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:39:57.748370 kubelet[2343]: I0710 00:39:57.748352 2343 policy_none.go:49] "None policy: Start"
Jul 10 00:39:57.748370 kubelet[2343]: I0710 00:39:57.748369 2343 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 00:39:57.748429 kubelet[2343]: I0710 00:39:57.748380 2343 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:39:57.753526 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 10 00:39:57.768482 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 10 00:39:57.771375 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 10 00:39:57.778409 kubelet[2343]: E0710 00:39:57.778395 2343 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 10 00:39:57.778903 kubelet[2343]: I0710 00:39:57.778889 2343 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:39:57.778980 kubelet[2343]: I0710 00:39:57.778954 2343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:39:57.779315 kubelet[2343]: I0710 00:39:57.779291 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:39:57.780508 kubelet[2343]: E0710 00:39:57.780482 2343 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 00:39:57.780738 kubelet[2343]: E0710 00:39:57.780715 2343 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-161-214\" not found"
Jul 10 00:39:57.845580 systemd[1]: Created slice kubepods-burstable-pod15d8ac9c68bb8544985ff3d9a7251de2.slice - libcontainer container kubepods-burstable-pod15d8ac9c68bb8544985ff3d9a7251de2.slice.
Jul 10 00:39:57.860281 kubelet[2343]: E0710 00:39:57.860173 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-214\" not found" node="172-238-161-214"
Jul 10 00:39:57.862806 systemd[1]: Created slice kubepods-burstable-podc65575d988399338ed084a07f92ae354.slice - libcontainer container kubepods-burstable-podc65575d988399338ed084a07f92ae354.slice.
Jul 10 00:39:57.873699 kubelet[2343]: E0710 00:39:57.873674 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-214\" not found" node="172-238-161-214"
Jul 10 00:39:57.876065 systemd[1]: Created slice kubepods-burstable-pod6ed64e1401f1bc101ee796ec07220606.slice - libcontainer container kubepods-burstable-pod6ed64e1401f1bc101ee796ec07220606.slice.
Jul 10 00:39:57.877888 kubelet[2343]: E0710 00:39:57.877850 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-214\" not found" node="172-238-161-214"
Jul 10 00:39:57.881119 kubelet[2343]: I0710 00:39:57.881096 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-161-214"
Jul 10 00:39:57.881445 kubelet[2343]: E0710 00:39:57.881420 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.161.214:6443/api/v1/nodes\": dial tcp 172.238.161.214:6443: connect: connection refused" node="172-238-161-214"
Jul 10 00:39:57.916064 kubelet[2343]: E0710 00:39:57.915990 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.161.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-161-214?timeout=10s\": dial tcp 172.238.161.214:6443: connect: connection refused" interval="400ms"
Jul 10 00:39:58.018777 kubelet[2343]: I0710 00:39:58.018650 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:39:58.018777 kubelet[2343]: I0710 00:39:58.018681 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ed64e1401f1bc101ee796ec07220606-k8s-certs\") pod \"kube-apiserver-172-238-161-214\" (UID: \"6ed64e1401f1bc101ee796ec07220606\") " pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:39:58.018777 kubelet[2343]: I0710 00:39:58.018699 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ed64e1401f1bc101ee796ec07220606-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-161-214\" (UID: \"6ed64e1401f1bc101ee796ec07220606\") " pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:39:58.018777 kubelet[2343]: I0710 00:39:58.018714 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-ca-certs\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:39:58.018777 kubelet[2343]: I0710 00:39:58.018726 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-k8s-certs\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:39:58.018904 kubelet[2343]: I0710 00:39:58.018758 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-kubeconfig\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:39:58.018904 kubelet[2343]: I0710 00:39:58.018821 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c65575d988399338ed084a07f92ae354-kubeconfig\") pod \"kube-scheduler-172-238-161-214\" (UID: \"c65575d988399338ed084a07f92ae354\") " pod="kube-system/kube-scheduler-172-238-161-214"
Jul 10 00:39:58.018904 kubelet[2343]: I0710 00:39:58.018847 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ed64e1401f1bc101ee796ec07220606-ca-certs\") pod \"kube-apiserver-172-238-161-214\" (UID: \"6ed64e1401f1bc101ee796ec07220606\") " pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:39:58.018904 kubelet[2343]: I0710 00:39:58.018867 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-flexvolume-dir\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:39:58.083070 kubelet[2343]: I0710 00:39:58.082820 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-161-214"
Jul 10 00:39:58.083166 kubelet[2343]: E0710 00:39:58.083128 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.161.214:6443/api/v1/nodes\": dial tcp 172.238.161.214:6443: connect: connection refused" node="172-238-161-214"
Jul 10 00:39:58.161293 kubelet[2343]: E0710 00:39:58.161267 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:58.162356 containerd[1580]: time="2025-07-10T00:39:58.162312849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-161-214,Uid:15d8ac9c68bb8544985ff3d9a7251de2,Namespace:kube-system,Attempt:0,}"
Jul 10 00:39:58.174012 kubelet[2343]: E0710 00:39:58.173958 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:58.174542 containerd[1580]: time="2025-07-10T00:39:58.174517291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-161-214,Uid:c65575d988399338ed084a07f92ae354,Namespace:kube-system,Attempt:0,}"
Jul 10 00:39:58.178959 kubelet[2343]: E0710 00:39:58.178929 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:58.191289 containerd[1580]: time="2025-07-10T00:39:58.190460177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-161-214,Uid:6ed64e1401f1bc101ee796ec07220606,Namespace:kube-system,Attempt:0,}"
Jul 10 00:39:58.192688 containerd[1580]: time="2025-07-10T00:39:58.192666569Z" level=info msg="connecting to shim a23c420652e64dbf85d2d2fe44d676f52d9afaf9ee7007e76e8806201cdbcba6" address="unix:///run/containerd/s/a8cd7f1d9a1fb72a57dee795a954b415b41cc612a6fdb37fbbc890c35cca904d" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:39:58.202710 containerd[1580]: time="2025-07-10T00:39:58.202688269Z" level=info msg="connecting to shim 52833a46d86eb5912232afbd5d6f0a584ae658e11af24483a4924f307fece5ff" address="unix:///run/containerd/s/7b5d43197e7cec344376f2140f0e895122bcd4c0d03ee3794aaa0868965c391f" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:39:58.231878 systemd[1]: Started cri-containerd-a23c420652e64dbf85d2d2fe44d676f52d9afaf9ee7007e76e8806201cdbcba6.scope - libcontainer container a23c420652e64dbf85d2d2fe44d676f52d9afaf9ee7007e76e8806201cdbcba6.
Jul 10 00:39:58.235096 containerd[1580]: time="2025-07-10T00:39:58.235051811Z" level=info msg="connecting to shim af8e18351f8bd60f807b8e9892d0133f3395299d38e4743df6254dfe1c80e2f7" address="unix:///run/containerd/s/655d00598a8861a20945053651119f67204f9088fa2bcdc68e0418a77aaaee37" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:39:58.241828 systemd[1]: Started cri-containerd-52833a46d86eb5912232afbd5d6f0a584ae658e11af24483a4924f307fece5ff.scope - libcontainer container 52833a46d86eb5912232afbd5d6f0a584ae658e11af24483a4924f307fece5ff.
Jul 10 00:39:58.267735 systemd[1]: Started cri-containerd-af8e18351f8bd60f807b8e9892d0133f3395299d38e4743df6254dfe1c80e2f7.scope - libcontainer container af8e18351f8bd60f807b8e9892d0133f3395299d38e4743df6254dfe1c80e2f7.
Jul 10 00:39:58.295057 containerd[1580]: time="2025-07-10T00:39:58.294994501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-161-214,Uid:15d8ac9c68bb8544985ff3d9a7251de2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23c420652e64dbf85d2d2fe44d676f52d9afaf9ee7007e76e8806201cdbcba6\""
Jul 10 00:39:58.296294 kubelet[2343]: E0710 00:39:58.296250 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:58.301531 containerd[1580]: time="2025-07-10T00:39:58.300829947Z" level=info msg="CreateContainer within sandbox \"a23c420652e64dbf85d2d2fe44d676f52d9afaf9ee7007e76e8806201cdbcba6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 10 00:39:58.306310 containerd[1580]: time="2025-07-10T00:39:58.306295433Z" level=info msg="Container 90f933089d07b26e13fa37793bb910fe94613dc96c25e518b173b8076bcbf3c7: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:39:58.316578 kubelet[2343]: E0710 00:39:58.316556 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.161.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-161-214?timeout=10s\": dial tcp 172.238.161.214:6443: connect: connection refused" interval="800ms"
Jul 10 00:39:58.316669 containerd[1580]: time="2025-07-10T00:39:58.316571243Z" level=info msg="CreateContainer within sandbox \"a23c420652e64dbf85d2d2fe44d676f52d9afaf9ee7007e76e8806201cdbcba6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"90f933089d07b26e13fa37793bb910fe94613dc96c25e518b173b8076bcbf3c7\""
Jul 10 00:39:58.317155 containerd[1580]: time="2025-07-10T00:39:58.317135703Z" level=info msg="StartContainer for \"90f933089d07b26e13fa37793bb910fe94613dc96c25e518b173b8076bcbf3c7\""
Jul 10 00:39:58.321983 containerd[1580]: time="2025-07-10T00:39:58.321895018Z" level=info msg="connecting to shim 90f933089d07b26e13fa37793bb910fe94613dc96c25e518b173b8076bcbf3c7" address="unix:///run/containerd/s/a8cd7f1d9a1fb72a57dee795a954b415b41cc612a6fdb37fbbc890c35cca904d" protocol=ttrpc version=3
Jul 10 00:39:58.324302 containerd[1580]: time="2025-07-10T00:39:58.324275461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-161-214,Uid:c65575d988399338ed084a07f92ae354,Namespace:kube-system,Attempt:0,} returns sandbox id \"52833a46d86eb5912232afbd5d6f0a584ae658e11af24483a4924f307fece5ff\""
Jul 10 00:39:58.325220 kubelet[2343]: E0710 00:39:58.325198 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:58.331767 containerd[1580]: time="2025-07-10T00:39:58.331733538Z" level=info msg="CreateContainer within sandbox \"52833a46d86eb5912232afbd5d6f0a584ae658e11af24483a4924f307fece5ff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 10 00:39:58.343490 containerd[1580]: time="2025-07-10T00:39:58.343456770Z" level=info msg="Container a4e1e4a5d7ea3663bc90619401ab3f805eeb8e36d8d641e5841e47287b4f3efe: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:39:58.348121 containerd[1580]: time="2025-07-10T00:39:58.348045954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-161-214,Uid:6ed64e1401f1bc101ee796ec07220606,Namespace:kube-system,Attempt:0,} returns sandbox id \"af8e18351f8bd60f807b8e9892d0133f3395299d38e4743df6254dfe1c80e2f7\""
Jul 10 00:39:58.349491 kubelet[2343]: E0710 00:39:58.349452 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:58.352301 containerd[1580]: time="2025-07-10T00:39:58.351983028Z" level=info msg="CreateContainer within sandbox \"52833a46d86eb5912232afbd5d6f0a584ae658e11af24483a4924f307fece5ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4e1e4a5d7ea3663bc90619401ab3f805eeb8e36d8d641e5841e47287b4f3efe\""
Jul 10 00:39:58.353657 containerd[1580]: time="2025-07-10T00:39:58.353578060Z" level=info msg="StartContainer for \"a4e1e4a5d7ea3663bc90619401ab3f805eeb8e36d8d641e5841e47287b4f3efe\""
Jul 10 00:39:58.355870 containerd[1580]: time="2025-07-10T00:39:58.355190521Z" level=info msg="CreateContainer within sandbox \"af8e18351f8bd60f807b8e9892d0133f3395299d38e4743df6254dfe1c80e2f7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 10 00:39:58.358461 containerd[1580]: time="2025-07-10T00:39:58.358264355Z" level=info msg="connecting to shim a4e1e4a5d7ea3663bc90619401ab3f805eeb8e36d8d641e5841e47287b4f3efe" address="unix:///run/containerd/s/7b5d43197e7cec344376f2140f0e895122bcd4c0d03ee3794aaa0868965c391f" protocol=ttrpc version=3
Jul 10 00:39:58.359892 systemd[1]: Started cri-containerd-90f933089d07b26e13fa37793bb910fe94613dc96c25e518b173b8076bcbf3c7.scope - libcontainer container 90f933089d07b26e13fa37793bb910fe94613dc96c25e518b173b8076bcbf3c7.
Jul 10 00:39:58.363476 containerd[1580]: time="2025-07-10T00:39:58.363456400Z" level=info msg="Container a5800695c1b63cb5769f38923c122422bd1e91f746b027b4f077e5eb4b775c1d: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:39:58.388762 systemd[1]: Started cri-containerd-a4e1e4a5d7ea3663bc90619401ab3f805eeb8e36d8d641e5841e47287b4f3efe.scope - libcontainer container a4e1e4a5d7ea3663bc90619401ab3f805eeb8e36d8d641e5841e47287b4f3efe.
Jul 10 00:39:58.390774 containerd[1580]: time="2025-07-10T00:39:58.390666177Z" level=info msg="CreateContainer within sandbox \"af8e18351f8bd60f807b8e9892d0133f3395299d38e4743df6254dfe1c80e2f7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a5800695c1b63cb5769f38923c122422bd1e91f746b027b4f077e5eb4b775c1d\""
Jul 10 00:39:58.391377 containerd[1580]: time="2025-07-10T00:39:58.391320988Z" level=info msg="StartContainer for \"a5800695c1b63cb5769f38923c122422bd1e91f746b027b4f077e5eb4b775c1d\""
Jul 10 00:39:58.392686 containerd[1580]: time="2025-07-10T00:39:58.392662369Z" level=info msg="connecting to shim a5800695c1b63cb5769f38923c122422bd1e91f746b027b4f077e5eb4b775c1d" address="unix:///run/containerd/s/655d00598a8861a20945053651119f67204f9088fa2bcdc68e0418a77aaaee37" protocol=ttrpc version=3
Jul 10 00:39:58.417920 systemd[1]: Started cri-containerd-a5800695c1b63cb5769f38923c122422bd1e91f746b027b4f077e5eb4b775c1d.scope - libcontainer container a5800695c1b63cb5769f38923c122422bd1e91f746b027b4f077e5eb4b775c1d.
Jul 10 00:39:58.439599 containerd[1580]: time="2025-07-10T00:39:58.439512076Z" level=info msg="StartContainer for \"90f933089d07b26e13fa37793bb910fe94613dc96c25e518b173b8076bcbf3c7\" returns successfully"
Jul 10 00:39:58.467766 containerd[1580]: time="2025-07-10T00:39:58.467683734Z" level=info msg="StartContainer for \"a4e1e4a5d7ea3663bc90619401ab3f805eeb8e36d8d641e5841e47287b4f3efe\" returns successfully"
Jul 10 00:39:58.486721 kubelet[2343]: I0710 00:39:58.486685 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-161-214"
Jul 10 00:39:58.487836 kubelet[2343]: E0710 00:39:58.487799 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.161.214:6443/api/v1/nodes\": dial tcp 172.238.161.214:6443: connect: connection refused" node="172-238-161-214"
Jul 10 00:39:58.490202 containerd[1580]: time="2025-07-10T00:39:58.490166486Z" level=info msg="StartContainer for \"a5800695c1b63cb5769f38923c122422bd1e91f746b027b4f077e5eb4b775c1d\" returns successfully"
Jul 10 00:39:58.746094 kubelet[2343]: E0710 00:39:58.745853 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-214\" not found" node="172-238-161-214"
Jul 10 00:39:58.746094 kubelet[2343]: E0710 00:39:58.745965 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:58.746515 kubelet[2343]: E0710 00:39:58.746500 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-214\" not found" node="172-238-161-214"
Jul 10 00:39:58.746690 kubelet[2343]: E0710 00:39:58.746677 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:58.748960 kubelet[2343]: E0710 00:39:58.748814 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-214\" not found" node="172-238-161-214"
Jul 10 00:39:58.748960 kubelet[2343]: E0710 00:39:58.748908 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:59.289883 kubelet[2343]: I0710 00:39:59.289836 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-161-214"
Jul 10 00:39:59.754949 kubelet[2343]: E0710 00:39:59.754771 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-214\" not found" node="172-238-161-214"
Jul 10 00:39:59.754949 kubelet[2343]: E0710 00:39:59.754883 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:59.758009 kubelet[2343]: E0710 00:39:59.757852 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-161-214\" not found" node="172-238-161-214"
Jul 10 00:39:59.758009 kubelet[2343]: E0710 00:39:59.757928 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:39:59.809091 kubelet[2343]: E0710 00:39:59.809068 2343 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-238-161-214\" not found" node="172-238-161-214"
Jul 10 00:39:59.845458 kubelet[2343]: I0710 00:39:59.845405 2343 kubelet_node_status.go:78] "Successfully registered node" node="172-238-161-214"
Jul 10 00:39:59.845458 kubelet[2343]: E0710 00:39:59.845425 2343 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-238-161-214\": node \"172-238-161-214\" not found"
Jul 10 00:39:59.907821 kubelet[2343]: I0710 00:39:59.907687 2343 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-161-214"
Jul 10 00:39:59.920058 kubelet[2343]: E0710 00:39:59.920011 2343 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-161-214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-161-214"
Jul 10 00:39:59.920058 kubelet[2343]: I0710 00:39:59.920054 2343 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:39:59.921596 kubelet[2343]: E0710 00:39:59.921569 2343 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-161-214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:39:59.921596 kubelet[2343]: I0710 00:39:59.921590 2343 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:39:59.927646 kubelet[2343]: E0710 00:39:59.927581 2343 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-161-214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:40:00.693541 kubelet[2343]: I0710 00:40:00.693502 2343 apiserver.go:52] "Watching apiserver"
Jul 10 00:40:00.717703 kubelet[2343]: I0710 00:40:00.717680 2343 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 00:40:01.767061 systemd[1]: Reload requested from client PID 2619 ('systemctl') (unit session-7.scope)...
Jul 10 00:40:01.767083 systemd[1]: Reloading...
Jul 10 00:40:01.878694 zram_generator::config[2663]: No configuration found.
Jul 10 00:40:01.969846 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:40:02.090148 systemd[1]: Reloading finished in 322 ms.
Jul 10 00:40:02.128443 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:40:02.138508 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:40:02.138839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:40:02.138906 systemd[1]: kubelet.service: Consumed 1.223s CPU time, 130.4M memory peak.
Jul 10 00:40:02.141324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:40:02.313413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:40:02.322891 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:40:02.367586 kubelet[2714]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:40:02.367586 kubelet[2714]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:40:02.367586 kubelet[2714]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:40:02.367586 kubelet[2714]: I0710 00:40:02.367375 2714 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:40:02.372473 kubelet[2714]: I0710 00:40:02.372446 2714 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 10 00:40:02.372473 kubelet[2714]: I0710 00:40:02.372464 2714 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:40:02.372731 kubelet[2714]: I0710 00:40:02.372708 2714 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 10 00:40:02.373543 kubelet[2714]: I0710 00:40:02.373522 2714 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 10 00:40:02.377569 kubelet[2714]: I0710 00:40:02.377427 2714 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:40:02.381687 kubelet[2714]: I0710 00:40:02.381150 2714 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 00:40:02.384399 kubelet[2714]: I0710 00:40:02.384388 2714 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:40:02.384692 kubelet[2714]: I0710 00:40:02.384670 2714 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:40:02.384828 kubelet[2714]: I0710 00:40:02.384737 2714 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-161-214","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:40:02.384920 kubelet[2714]: I0710 00:40:02.384912 2714 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:40:02.384964 kubelet[2714]: I0710 00:40:02.384957 2714 container_manager_linux.go:303] "Creating device plugin manager"
Jul 10 00:40:02.385032 kubelet[2714]: I0710 00:40:02.385025 2714 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:40:02.385193 kubelet[2714]: I0710 00:40:02.385182 2714 kubelet.go:480] "Attempting to sync node with API server"
Jul 10 00:40:02.385247 kubelet[2714]: I0710 00:40:02.385237 2714 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:40:02.385308 kubelet[2714]: I0710 00:40:02.385298 2714 kubelet.go:386] "Adding apiserver pod source"
Jul 10 00:40:02.385360 kubelet[2714]: I0710 00:40:02.385351 2714 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:40:02.390128 kubelet[2714]: I0710 00:40:02.390111 2714 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 10 00:40:02.390539 kubelet[2714]: I0710 00:40:02.390527 2714 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 10 00:40:02.393991 kubelet[2714]: I0710 00:40:02.392399 2714 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:40:02.393991 kubelet[2714]: I0710 00:40:02.392425 2714 server.go:1289] "Started kubelet"
Jul 10 00:40:02.394542 kubelet[2714]: I0710 00:40:02.394525 2714 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:40:02.395591 kubelet[2714]: I0710 00:40:02.395463 2714 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:40:02.397104 kubelet[2714]: I0710 00:40:02.397091 2714 server.go:317] "Adding debug handlers to kubelet server"
Jul 10 00:40:02.397367 kubelet[2714]: I0710 00:40:02.397348 2714 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:40:02.399463 kubelet[2714]: I0710 00:40:02.399443 2714 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:40:02.399604 kubelet[2714]: E0710 00:40:02.399583 2714 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-161-214\" not found"
Jul 10 00:40:02.402338 kubelet[2714]: I0710 00:40:02.400388 2714 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:40:02.402338 kubelet[2714]: I0710 00:40:02.400479 2714 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:40:02.402516 kubelet[2714]: I0710 00:40:02.402482 2714 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:40:02.403175 kubelet[2714]: I0710 00:40:02.402785 2714 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:40:02.405301 kubelet[2714]: I0710 00:40:02.404364 2714 factory.go:223] Registration of the systemd container factory successfully
Jul 10 00:40:02.405438 kubelet[2714]: I0710 00:40:02.405422 2714 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:40:02.408518 kubelet[2714]: I0710 00:40:02.408504 2714 factory.go:223] Registration of the containerd container factory successfully
Jul 10 00:40:02.412113 kubelet[2714]: I0710 00:40:02.412092 2714 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:40:02.413181 kubelet[2714]: I0710 00:40:02.413160 2714 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:40:02.413181 kubelet[2714]: I0710 00:40:02.413176 2714 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 10 00:40:02.413243 kubelet[2714]: I0710 00:40:02.413189 2714 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:40:02.413243 kubelet[2714]: I0710 00:40:02.413194 2714 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 10 00:40:02.413243 kubelet[2714]: E0710 00:40:02.413222 2714 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:40:02.451796 kubelet[2714]: I0710 00:40:02.451771 2714 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:40:02.451796 kubelet[2714]: I0710 00:40:02.451788 2714 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:40:02.451918 kubelet[2714]: I0710 00:40:02.451828 2714 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:40:02.452221 kubelet[2714]: I0710 00:40:02.451961 2714 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 10 00:40:02.452221 kubelet[2714]: I0710 00:40:02.451996 2714 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 10 00:40:02.452221 kubelet[2714]: I0710 00:40:02.452011 2714 policy_none.go:49] "None policy: Start"
Jul 10 00:40:02.452221 kubelet[2714]: I0710 00:40:02.452021 2714 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 00:40:02.452221 kubelet[2714]: I0710 00:40:02.452031 2714 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:40:02.452221 kubelet[2714]: I0710 00:40:02.452131 2714 state_mem.go:75] "Updated machine memory state"
Jul 10 00:40:02.458333 kubelet[2714]: E0710 00:40:02.458309 2714 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 10 00:40:02.458479 kubelet[2714]: I0710 00:40:02.458446 2714 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:40:02.458479 kubelet[2714]: I0710 00:40:02.458455 2714 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:40:02.460305 kubelet[2714]: I0710 00:40:02.460229 2714 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:40:02.463665 kubelet[2714]: E0710 00:40:02.463651 2714 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 00:40:02.514729 kubelet[2714]: I0710 00:40:02.514697 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:40:02.515028 kubelet[2714]: I0710 00:40:02.515006 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-161-214"
Jul 10 00:40:02.515197 kubelet[2714]: I0710 00:40:02.515177 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:40:02.561438 kubelet[2714]: I0710 00:40:02.561407 2714 kubelet_node_status.go:75] "Attempting to register node" node="172-238-161-214"
Jul 10 00:40:02.568199 kubelet[2714]: I0710 00:40:02.568170 2714 kubelet_node_status.go:124] "Node was previously registered" node="172-238-161-214"
Jul 10 00:40:02.568239 kubelet[2714]: I0710 00:40:02.568228 2714 kubelet_node_status.go:78] "Successfully registered node" node="172-238-161-214"
Jul 10 00:40:02.601824 kubelet[2714]: I0710 00:40:02.601807 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ed64e1401f1bc101ee796ec07220606-k8s-certs\") pod \"kube-apiserver-172-238-161-214\" (UID: \"6ed64e1401f1bc101ee796ec07220606\") " pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:40:02.601897 kubelet[2714]: I0710 00:40:02.601831 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-flexvolume-dir\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:40:02.601897 kubelet[2714]: I0710 00:40:02.601848 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-k8s-certs\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:40:02.601897 kubelet[2714]: I0710 00:40:02.601863 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-kubeconfig\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:40:02.601897 kubelet[2714]: I0710 00:40:02.601876 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:40:02.601897 kubelet[2714]: I0710 00:40:02.601889 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ed64e1401f1bc101ee796ec07220606-ca-certs\") pod \"kube-apiserver-172-238-161-214\" (UID: \"6ed64e1401f1bc101ee796ec07220606\") " pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:40:02.602004 kubelet[2714]: I0710 00:40:02.601902 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ed64e1401f1bc101ee796ec07220606-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-161-214\" (UID: \"6ed64e1401f1bc101ee796ec07220606\") " pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:40:02.602004 kubelet[2714]: I0710 00:40:02.601915 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15d8ac9c68bb8544985ff3d9a7251de2-ca-certs\") pod \"kube-controller-manager-172-238-161-214\" (UID: \"15d8ac9c68bb8544985ff3d9a7251de2\") " pod="kube-system/kube-controller-manager-172-238-161-214"
Jul 10 00:40:02.602004 kubelet[2714]: I0710 00:40:02.601929 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c65575d988399338ed084a07f92ae354-kubeconfig\") pod \"kube-scheduler-172-238-161-214\" (UID: \"c65575d988399338ed084a07f92ae354\") " pod="kube-system/kube-scheduler-172-238-161-214"
Jul 10 00:40:02.819836 kubelet[2714]: E0710 00:40:02.819664 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:02.819979 kubelet[2714]: E0710 00:40:02.819949 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:02.820153 kubelet[2714]: E0710 00:40:02.820089 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:03.391646 kubelet[2714]: I0710 00:40:03.390634 2714 apiserver.go:52] "Watching apiserver"
Jul 10 00:40:03.401090 kubelet[2714]: I0710 00:40:03.401057 2714 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 00:40:03.436251 kubelet[2714]: I0710 00:40:03.436222 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-161-214"
Jul 10 00:40:03.436594 kubelet[2714]: E0710 00:40:03.436564 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:03.439282 kubelet[2714]: I0710 00:40:03.437017 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:40:03.442576 kubelet[2714]: E0710 00:40:03.442321 2714 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-161-214\" already exists" pod="kube-system/kube-scheduler-172-238-161-214"
Jul 10 00:40:03.442576 kubelet[2714]: E0710 00:40:03.442502 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:03.444073 kubelet[2714]: E0710 00:40:03.444059 2714 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-161-214\" already exists" pod="kube-system/kube-apiserver-172-238-161-214"
Jul 10 00:40:03.444227 kubelet[2714]: E0710 00:40:03.444213 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:03.465775 kubelet[2714]: I0710 00:40:03.465718 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-238-161-214" podStartSLOduration=1.465706751 podStartE2EDuration="1.465706751s" podCreationTimestamp="2025-07-10 00:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:40:03.465582681 +0000 UTC m=+1.137592239" watchObservedRunningTime="2025-07-10 00:40:03.465706751 +0000 UTC m=+1.137716299"
Jul 10 00:40:03.465861 kubelet[2714]: I0710 00:40:03.465806 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-161-214" podStartSLOduration=1.465801981 podStartE2EDuration="1.465801981s" podCreationTimestamp="2025-07-10 00:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:40:03.458230343 +0000 UTC m=+1.130239891" watchObservedRunningTime="2025-07-10 00:40:03.465801981 +0000 UTC m=+1.137811539"
Jul 10 00:40:03.561235 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 10 00:40:04.437340 kubelet[2714]: E0710 00:40:04.437313 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:04.438338 kubelet[2714]: E0710 00:40:04.438323 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:04.438490 kubelet[2714]: E0710 00:40:04.438478 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:05.439394 kubelet[2714]: E0710 00:40:05.439342 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:07.277538 kubelet[2714]: I0710 00:40:07.277507 2714 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 10 00:40:07.278265 kubelet[2714]: I0710 00:40:07.278089 2714 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 10 00:40:07.278298 containerd[1580]: time="2025-07-10T00:40:07.277858718Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 10 00:40:07.770401 kubelet[2714]: I0710 00:40:07.770304 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-161-214" podStartSLOduration=5.770270192 podStartE2EDuration="5.770270192s" podCreationTimestamp="2025-07-10 00:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:40:03.472814498 +0000 UTC m=+1.144824046" watchObservedRunningTime="2025-07-10 00:40:07.770270192 +0000 UTC m=+5.442279740"
Jul 10 00:40:07.782954 systemd[1]: Created slice kubepods-besteffort-pod55940769_9a4b_4b04_9f1f_4eab04829f7c.slice - libcontainer container kubepods-besteffort-pod55940769_9a4b_4b04_9f1f_4eab04829f7c.slice.
Jul 10 00:40:07.832698 kubelet[2714]: I0710 00:40:07.832672 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55940769-9a4b-4b04-9f1f-4eab04829f7c-xtables-lock\") pod \"kube-proxy-jg6z8\" (UID: \"55940769-9a4b-4b04-9f1f-4eab04829f7c\") " pod="kube-system/kube-proxy-jg6z8"
Jul 10 00:40:07.832854 kubelet[2714]: I0710 00:40:07.832703 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55940769-9a4b-4b04-9f1f-4eab04829f7c-kube-proxy\") pod \"kube-proxy-jg6z8\" (UID: \"55940769-9a4b-4b04-9f1f-4eab04829f7c\") " pod="kube-system/kube-proxy-jg6z8"
Jul 10 00:40:07.832854 kubelet[2714]: I0710 00:40:07.832723 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55940769-9a4b-4b04-9f1f-4eab04829f7c-lib-modules\") pod \"kube-proxy-jg6z8\" (UID: \"55940769-9a4b-4b04-9f1f-4eab04829f7c\") " pod="kube-system/kube-proxy-jg6z8"
Jul 10 00:40:07.832854 kubelet[2714]: I0710 00:40:07.832738 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54br\" (UniqueName: \"kubernetes.io/projected/55940769-9a4b-4b04-9f1f-4eab04829f7c-kube-api-access-f54br\") pod \"kube-proxy-jg6z8\" (UID: \"55940769-9a4b-4b04-9f1f-4eab04829f7c\") " pod="kube-system/kube-proxy-jg6z8"
Jul 10 00:40:07.937864 kubelet[2714]: E0710 00:40:07.937822 2714 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 10 00:40:07.937864 kubelet[2714]: E0710 00:40:07.937847 2714 projected.go:194] Error preparing data for projected volume kube-api-access-f54br for pod kube-system/kube-proxy-jg6z8: configmap "kube-root-ca.crt" not found
Jul 10 00:40:07.938017 kubelet[2714]: E0710 00:40:07.937897 2714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55940769-9a4b-4b04-9f1f-4eab04829f7c-kube-api-access-f54br podName:55940769-9a4b-4b04-9f1f-4eab04829f7c nodeName:}" failed. No retries permitted until 2025-07-10 00:40:08.437881388 +0000 UTC m=+6.109890936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f54br" (UniqueName: "kubernetes.io/projected/55940769-9a4b-4b04-9f1f-4eab04829f7c-kube-api-access-f54br") pod "kube-proxy-jg6z8" (UID: "55940769-9a4b-4b04-9f1f-4eab04829f7c") : configmap "kube-root-ca.crt" not found
Jul 10 00:40:08.450564 systemd[1]: Created slice kubepods-besteffort-poda7fca4dd_8cd0_4dcb_bd82_c691e7e54920.slice - libcontainer container kubepods-besteffort-poda7fca4dd_8cd0_4dcb_bd82_c691e7e54920.slice.
Jul 10 00:40:08.536928 kubelet[2714]: I0710 00:40:08.536871 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7fca4dd-8cd0-4dcb-bd82-c691e7e54920-var-lib-calico\") pod \"tigera-operator-747864d56d-hhtjk\" (UID: \"a7fca4dd-8cd0-4dcb-bd82-c691e7e54920\") " pod="tigera-operator/tigera-operator-747864d56d-hhtjk"
Jul 10 00:40:08.536928 kubelet[2714]: I0710 00:40:08.536903 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhl9p\" (UniqueName: \"kubernetes.io/projected/a7fca4dd-8cd0-4dcb-bd82-c691e7e54920-kube-api-access-dhl9p\") pod \"tigera-operator-747864d56d-hhtjk\" (UID: \"a7fca4dd-8cd0-4dcb-bd82-c691e7e54920\") " pod="tigera-operator/tigera-operator-747864d56d-hhtjk"
Jul 10 00:40:08.696098 kubelet[2714]: E0710 00:40:08.696066 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:08.696983 containerd[1580]: time="2025-07-10T00:40:08.696938746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jg6z8,Uid:55940769-9a4b-4b04-9f1f-4eab04829f7c,Namespace:kube-system,Attempt:0,}"
Jul 10 00:40:08.713689 containerd[1580]: time="2025-07-10T00:40:08.713336569Z" level=info msg="connecting to shim 6b83accf64ea28a088ef11449b68bae6e2f75e2209844842b445348d7cd585e5" address="unix:///run/containerd/s/4688e6451f2d1e239962dc209b680d65421a7d43aa99dc003a7fab0576c256b4" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:40:08.738764 systemd[1]: Started cri-containerd-6b83accf64ea28a088ef11449b68bae6e2f75e2209844842b445348d7cd585e5.scope - libcontainer container 6b83accf64ea28a088ef11449b68bae6e2f75e2209844842b445348d7cd585e5.
Jul 10 00:40:08.754426 containerd[1580]: time="2025-07-10T00:40:08.754398178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-hhtjk,Uid:a7fca4dd-8cd0-4dcb-bd82-c691e7e54920,Namespace:tigera-operator,Attempt:0,}"
Jul 10 00:40:08.772748 containerd[1580]: time="2025-07-10T00:40:08.772710597Z" level=info msg="connecting to shim 30b809a1444fee6f1f011ae0023d7b82071cd9a35bdf9d11d391b4dba3ca2a45" address="unix:///run/containerd/s/a8894418ca0597a30e29d4b6c99e0e2f12f7cf3eaf1ef4dfd487ebf98104ac2e" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:40:08.779364 containerd[1580]: time="2025-07-10T00:40:08.779209006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jg6z8,Uid:55940769-9a4b-4b04-9f1f-4eab04829f7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b83accf64ea28a088ef11449b68bae6e2f75e2209844842b445348d7cd585e5\""
Jul 10 00:40:08.780305 kubelet[2714]: E0710 00:40:08.780268 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:08.786580 containerd[1580]: time="2025-07-10T00:40:08.786545971Z" level=info msg="CreateContainer within sandbox \"6b83accf64ea28a088ef11449b68bae6e2f75e2209844842b445348d7cd585e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 10 00:40:08.801817 containerd[1580]: time="2025-07-10T00:40:08.801689266Z" level=info msg="Container 208cb939d7f6244e8a283e86182d77c4ad85e921639b496eba5fcfb84dfbc130: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:40:08.807878 systemd[1]: Started cri-containerd-30b809a1444fee6f1f011ae0023d7b82071cd9a35bdf9d11d391b4dba3ca2a45.scope - libcontainer container 30b809a1444fee6f1f011ae0023d7b82071cd9a35bdf9d11d391b4dba3ca2a45.
Jul 10 00:40:08.811389 containerd[1580]: time="2025-07-10T00:40:08.811350778Z" level=info msg="CreateContainer within sandbox \"6b83accf64ea28a088ef11449b68bae6e2f75e2209844842b445348d7cd585e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"208cb939d7f6244e8a283e86182d77c4ad85e921639b496eba5fcfb84dfbc130\""
Jul 10 00:40:08.812223 containerd[1580]: time="2025-07-10T00:40:08.812061413Z" level=info msg="StartContainer for \"208cb939d7f6244e8a283e86182d77c4ad85e921639b496eba5fcfb84dfbc130\""
Jul 10 00:40:08.815074 containerd[1580]: time="2025-07-10T00:40:08.815003216Z" level=info msg="connecting to shim 208cb939d7f6244e8a283e86182d77c4ad85e921639b496eba5fcfb84dfbc130" address="unix:///run/containerd/s/4688e6451f2d1e239962dc209b680d65421a7d43aa99dc003a7fab0576c256b4" protocol=ttrpc version=3
Jul 10 00:40:08.840736 systemd[1]: Started cri-containerd-208cb939d7f6244e8a283e86182d77c4ad85e921639b496eba5fcfb84dfbc130.scope - libcontainer container 208cb939d7f6244e8a283e86182d77c4ad85e921639b496eba5fcfb84dfbc130.
Jul 10 00:40:08.874339 containerd[1580]: time="2025-07-10T00:40:08.874225522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-hhtjk,Uid:a7fca4dd-8cd0-4dcb-bd82-c691e7e54920,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"30b809a1444fee6f1f011ae0023d7b82071cd9a35bdf9d11d391b4dba3ca2a45\""
Jul 10 00:40:08.876457 containerd[1580]: time="2025-07-10T00:40:08.876402338Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 10 00:40:08.902828 containerd[1580]: time="2025-07-10T00:40:08.902785837Z" level=info msg="StartContainer for \"208cb939d7f6244e8a283e86182d77c4ad85e921639b496eba5fcfb84dfbc130\" returns successfully"
Jul 10 00:40:09.447557 kubelet[2714]: E0710 00:40:09.447530 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:09.882924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581838545.mount: Deactivated successfully.
Jul 10 00:40:10.328411 containerd[1580]: time="2025-07-10T00:40:10.328367442Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:10.329185 containerd[1580]: time="2025-07-10T00:40:10.328989307Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 10 00:40:10.329641 containerd[1580]: time="2025-07-10T00:40:10.329591080Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:10.331307 containerd[1580]: time="2025-07-10T00:40:10.331280532Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:10.331700 containerd[1580]: time="2025-07-10T00:40:10.331667174Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.455142355s"
Jul 10 00:40:10.331741 containerd[1580]: time="2025-07-10T00:40:10.331698784Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 10 00:40:10.335694 containerd[1580]: time="2025-07-10T00:40:10.335671241Z" level=info msg="CreateContainer within sandbox \"30b809a1444fee6f1f011ae0023d7b82071cd9a35bdf9d11d391b4dba3ca2a45\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 10 00:40:10.342821 containerd[1580]: time="2025-07-10T00:40:10.342790439Z" level=info msg="Container 9d2a4aba8f0031e7e663d9b4f03cf0fa56406a73c9bb97b6aba07b90a72350f6: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:40:10.350067 containerd[1580]: time="2025-07-10T00:40:10.350031289Z" level=info msg="CreateContainer within sandbox \"30b809a1444fee6f1f011ae0023d7b82071cd9a35bdf9d11d391b4dba3ca2a45\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9d2a4aba8f0031e7e663d9b4f03cf0fa56406a73c9bb97b6aba07b90a72350f6\""
Jul 10 00:40:10.350545 containerd[1580]: time="2025-07-10T00:40:10.350465721Z" level=info msg="StartContainer for \"9d2a4aba8f0031e7e663d9b4f03cf0fa56406a73c9bb97b6aba07b90a72350f6\""
Jul 10 00:40:10.352401 containerd[1580]: time="2025-07-10T00:40:10.352369944Z" level=info msg="connecting to shim 9d2a4aba8f0031e7e663d9b4f03cf0fa56406a73c9bb97b6aba07b90a72350f6" address="unix:///run/containerd/s/a8894418ca0597a30e29d4b6c99e0e2f12f7cf3eaf1ef4dfd487ebf98104ac2e" protocol=ttrpc version=3
Jul 10 00:40:10.374729 systemd[1]: Started cri-containerd-9d2a4aba8f0031e7e663d9b4f03cf0fa56406a73c9bb97b6aba07b90a72350f6.scope - libcontainer container 9d2a4aba8f0031e7e663d9b4f03cf0fa56406a73c9bb97b6aba07b90a72350f6.
Jul 10 00:40:10.400257 containerd[1580]: time="2025-07-10T00:40:10.400232177Z" level=info msg="StartContainer for \"9d2a4aba8f0031e7e663d9b4f03cf0fa56406a73c9bb97b6aba07b90a72350f6\" returns successfully"
Jul 10 00:40:10.457513 kubelet[2714]: I0710 00:40:10.457470 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jg6z8" podStartSLOduration=3.457454763 podStartE2EDuration="3.457454763s" podCreationTimestamp="2025-07-10 00:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:40:09.455592957 +0000 UTC m=+7.127602505" watchObservedRunningTime="2025-07-10 00:40:10.457454763 +0000 UTC m=+8.129464311"
Jul 10 00:40:10.458146 kubelet[2714]: I0710 00:40:10.457542 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-hhtjk" podStartSLOduration=1.000785716 podStartE2EDuration="2.457538733s" podCreationTimestamp="2025-07-10 00:40:08 +0000 UTC" firstStartedPulling="2025-07-10 00:40:08.875678963 +0000 UTC m=+6.547688511" lastFinishedPulling="2025-07-10 00:40:10.33243198 +0000 UTC m=+8.004441528" observedRunningTime="2025-07-10 00:40:10.457370802 +0000 UTC m=+8.129380350" watchObservedRunningTime="2025-07-10 00:40:10.457538733 +0000 UTC m=+8.129548281"
Jul 10 00:40:13.033810 kubelet[2714]: E0710 00:40:13.033769 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:13.457207 kubelet[2714]: E0710 00:40:13.457074 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:13.542914 kubelet[2714]: E0710 00:40:13.542865 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:14.572793 kubelet[2714]: E0710 00:40:14.572760 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:15.646831 sudo[1800]: pam_unix(sudo:session): session closed for user root
Jul 10 00:40:15.700198 sshd[1799]: Connection closed by 139.178.89.65 port 59524
Jul 10 00:40:15.700909 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Jul 10 00:40:15.706528 systemd[1]: sshd@6-172.238.161.214:22-139.178.89.65:59524.service: Deactivated successfully.
Jul 10 00:40:15.710239 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:40:15.711580 systemd[1]: session-7.scope: Consumed 3.839s CPU time, 233.4M memory peak.
Jul 10 00:40:15.716476 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:40:15.718614 systemd-logind[1548]: Removed session 7.
Jul 10 00:40:18.084775 systemd[1]: Created slice kubepods-besteffort-podd3dc037d_fb2c_4f3b_943c_f90b702b79f0.slice - libcontainer container kubepods-besteffort-podd3dc037d_fb2c_4f3b_943c_f90b702b79f0.slice.
Jul 10 00:40:18.099201 kubelet[2714]: I0710 00:40:18.099163 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d3dc037d-fb2c-4f3b-943c-f90b702b79f0-typha-certs\") pod \"calico-typha-7b5fdfd775-s54tz\" (UID: \"d3dc037d-fb2c-4f3b-943c-f90b702b79f0\") " pod="calico-system/calico-typha-7b5fdfd775-s54tz"
Jul 10 00:40:18.099512 kubelet[2714]: I0710 00:40:18.099252 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3dc037d-fb2c-4f3b-943c-f90b702b79f0-tigera-ca-bundle\") pod \"calico-typha-7b5fdfd775-s54tz\" (UID: \"d3dc037d-fb2c-4f3b-943c-f90b702b79f0\") " pod="calico-system/calico-typha-7b5fdfd775-s54tz"
Jul 10 00:40:18.099512 kubelet[2714]: I0710 00:40:18.099299 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrz42\" (UniqueName: \"kubernetes.io/projected/d3dc037d-fb2c-4f3b-943c-f90b702b79f0-kube-api-access-lrz42\") pod \"calico-typha-7b5fdfd775-s54tz\" (UID: \"d3dc037d-fb2c-4f3b-943c-f90b702b79f0\") " pod="calico-system/calico-typha-7b5fdfd775-s54tz"
Jul 10 00:40:18.240657 update_engine[1554]: I20250710 00:40:18.239661 1554 update_attempter.cc:509] Updating boot flags...
Jul 10 00:40:18.364535 systemd[1]: Created slice kubepods-besteffort-podb3614fb4_637d_4618_951f_6a9851feef93.slice - libcontainer container kubepods-besteffort-podb3614fb4_637d_4618_951f_6a9851feef93.slice.
Jul 10 00:40:18.389282 kubelet[2714]: E0710 00:40:18.389246 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:18.390670 containerd[1580]: time="2025-07-10T00:40:18.390497118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b5fdfd775-s54tz,Uid:d3dc037d-fb2c-4f3b-943c-f90b702b79f0,Namespace:calico-system,Attempt:0,}"
Jul 10 00:40:18.402779 kubelet[2714]: I0710 00:40:18.401900 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b3614fb4-637d-4618-951f-6a9851feef93-cni-net-dir\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.403599 kubelet[2714]: I0710 00:40:18.402984 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b3614fb4-637d-4618-951f-6a9851feef93-policysync\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.405611 kubelet[2714]: I0710 00:40:18.404532 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p97kf\" (UniqueName: \"kubernetes.io/projected/b3614fb4-637d-4618-951f-6a9851feef93-kube-api-access-p97kf\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.406304 kubelet[2714]: I0710 00:40:18.406165 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b3614fb4-637d-4618-951f-6a9851feef93-cni-bin-dir\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.406304 kubelet[2714]: I0710 00:40:18.406195 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3614fb4-637d-4618-951f-6a9851feef93-xtables-lock\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.406866 kubelet[2714]: I0710 00:40:18.406214 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b3614fb4-637d-4618-951f-6a9851feef93-node-certs\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.407716 kubelet[2714]: I0710 00:40:18.407011 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3614fb4-637d-4618-951f-6a9851feef93-tigera-ca-bundle\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.407716 kubelet[2714]: I0710 00:40:18.407035 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b3614fb4-637d-4618-951f-6a9851feef93-var-run-calico\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.407716 kubelet[2714]: I0710 00:40:18.407054 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b3614fb4-637d-4618-951f-6a9851feef93-flexvol-driver-host\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.407716 kubelet[2714]: I0710 00:40:18.407066 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3614fb4-637d-4618-951f-6a9851feef93-lib-modules\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.409120 kubelet[2714]: I0710 00:40:18.407079 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b3614fb4-637d-4618-951f-6a9851feef93-cni-log-dir\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.409120 kubelet[2714]: I0710 00:40:18.408774 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b3614fb4-637d-4618-951f-6a9851feef93-var-lib-calico\") pod \"calico-node-6g65w\" (UID: \"b3614fb4-637d-4618-951f-6a9851feef93\") " pod="calico-system/calico-node-6g65w"
Jul 10 00:40:18.432225 containerd[1580]: time="2025-07-10T00:40:18.432177483Z" level=info msg="connecting to shim d268335eb17997fef4aa1321ec313e7156729487c87c8b359e341bdeecfa30ca" address="unix:///run/containerd/s/26c398557eb5ac9eb2ead5fc5121fb5c2af5687899bc3983b4d010bad31fa59e" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:40:18.516755 kubelet[2714]: E0710 00:40:18.516699 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.516755 kubelet[2714]: W0710 00:40:18.516724 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.516755 kubelet[2714]: E0710 00:40:18.516743 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.516926 kubelet[2714]: E0710 00:40:18.516906 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.516926 kubelet[2714]: W0710 00:40:18.516920 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.516961 kubelet[2714]: E0710 00:40:18.516927 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.517062 kubelet[2714]: E0710 00:40:18.517043 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.517062 kubelet[2714]: W0710 00:40:18.517055 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.517104 kubelet[2714]: E0710 00:40:18.517063 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.525085 kubelet[2714]: E0710 00:40:18.524122 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.525085 kubelet[2714]: W0710 00:40:18.524142 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.525085 kubelet[2714]: E0710 00:40:18.524153 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.528433 kubelet[2714]: E0710 00:40:18.528400 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.528484 kubelet[2714]: W0710 00:40:18.528445 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.528484 kubelet[2714]: E0710 00:40:18.528459 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.538421 kubelet[2714]: E0710 00:40:18.538310 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.538453 kubelet[2714]: W0710 00:40:18.538427 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.538453 kubelet[2714]: E0710 00:40:18.538440 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.540735 kubelet[2714]: E0710 00:40:18.540715 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.540735 kubelet[2714]: W0710 00:40:18.540729 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.540735 kubelet[2714]: E0710 00:40:18.540738 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.543093 kubelet[2714]: E0710 00:40:18.543075 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.543093 kubelet[2714]: W0710 00:40:18.543089 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.543171 kubelet[2714]: E0710 00:40:18.543098 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.543316 kubelet[2714]: E0710 00:40:18.543239 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.543316 kubelet[2714]: W0710 00:40:18.543250 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.543316 kubelet[2714]: E0710 00:40:18.543256 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.543450 kubelet[2714]: E0710 00:40:18.543432 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.543450 kubelet[2714]: W0710 00:40:18.543445 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.543490 kubelet[2714]: E0710 00:40:18.543453 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.544536 kubelet[2714]: E0710 00:40:18.544216 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.544536 kubelet[2714]: W0710 00:40:18.544261 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.544536 kubelet[2714]: E0710 00:40:18.544269 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.544536 kubelet[2714]: E0710 00:40:18.544468 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.544536 kubelet[2714]: W0710 00:40:18.544503 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.544536 kubelet[2714]: E0710 00:40:18.544511 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.545149 kubelet[2714]: E0710 00:40:18.544966 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.545149 kubelet[2714]: W0710 00:40:18.544972 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.545149 kubelet[2714]: E0710 00:40:18.544979 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.545941 kubelet[2714]: E0710 00:40:18.545609 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.545941 kubelet[2714]: W0710 00:40:18.545687 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.545941 kubelet[2714]: E0710 00:40:18.545697 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.548764 kubelet[2714]: E0710 00:40:18.547360 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.548764 kubelet[2714]: W0710 00:40:18.547490 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.548764 kubelet[2714]: E0710 00:40:18.547500 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.570803 systemd[1]: Started cri-containerd-d268335eb17997fef4aa1321ec313e7156729487c87c8b359e341bdeecfa30ca.scope - libcontainer container d268335eb17997fef4aa1321ec313e7156729487c87c8b359e341bdeecfa30ca.
Jul 10 00:40:18.573814 kubelet[2714]: E0710 00:40:18.573604 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.574421 kubelet[2714]: W0710 00:40:18.573618 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.574421 kubelet[2714]: E0710 00:40:18.573959 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.674144 containerd[1580]: time="2025-07-10T00:40:18.674073564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6g65w,Uid:b3614fb4-637d-4618-951f-6a9851feef93,Namespace:calico-system,Attempt:0,}"
Jul 10 00:40:18.684717 kubelet[2714]: E0710 00:40:18.683942 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbsg4" podUID="d82186e1-710f-473a-b991-55b42ccd0785"
Jul 10 00:40:18.700357 kubelet[2714]: E0710 00:40:18.700324 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.700357 kubelet[2714]: W0710 00:40:18.700349 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.700681 kubelet[2714]: E0710 00:40:18.700654 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.701117 kubelet[2714]: E0710 00:40:18.701076 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.701427 kubelet[2714]: W0710 00:40:18.701342 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.701427 kubelet[2714]: E0710 00:40:18.701365 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.701836 kubelet[2714]: E0710 00:40:18.701813 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.701836 kubelet[2714]: W0710 00:40:18.701830 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.702178 kubelet[2714]: E0710 00:40:18.701840 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.702848 kubelet[2714]: E0710 00:40:18.702823 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.702848 kubelet[2714]: W0710 00:40:18.702839 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.702848 kubelet[2714]: E0710 00:40:18.702849 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:40:18.703093 kubelet[2714]: E0710 00:40:18.703077 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:40:18.703093 kubelet[2714]: W0710 00:40:18.703089 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:40:18.703149 kubelet[2714]: E0710 00:40:18.703098 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 10 00:40:18.703768 kubelet[2714]: E0710 00:40:18.703650 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.703768 kubelet[2714]: W0710 00:40:18.703664 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.703837 kubelet[2714]: E0710 00:40:18.703778 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.704761 kubelet[2714]: E0710 00:40:18.704741 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.704967 kubelet[2714]: W0710 00:40:18.704838 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.704967 kubelet[2714]: E0710 00:40:18.704861 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.705888 containerd[1580]: time="2025-07-10T00:40:18.705770244Z" level=info msg="connecting to shim 5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f" address="unix:///run/containerd/s/b31ad66a68a1073cd5d9369ef1b1dbfb7b8c98091af276f0b7ece53fd97dd708" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:40:18.705945 kubelet[2714]: E0710 00:40:18.705829 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.705945 kubelet[2714]: W0710 00:40:18.705837 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.705945 kubelet[2714]: E0710 00:40:18.705846 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.706748 kubelet[2714]: E0710 00:40:18.706725 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.706974 kubelet[2714]: W0710 00:40:18.706804 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.706974 kubelet[2714]: E0710 00:40:18.706906 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.707329 kubelet[2714]: E0710 00:40:18.707215 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.707576 kubelet[2714]: W0710 00:40:18.707386 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.707576 kubelet[2714]: E0710 00:40:18.707400 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.708139 kubelet[2714]: E0710 00:40:18.708115 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.708139 kubelet[2714]: W0710 00:40:18.708134 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.708307 kubelet[2714]: E0710 00:40:18.708145 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.709025 kubelet[2714]: E0710 00:40:18.708996 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.709369 kubelet[2714]: W0710 00:40:18.709279 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.709369 kubelet[2714]: E0710 00:40:18.709297 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.710422 kubelet[2714]: E0710 00:40:18.710399 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.710422 kubelet[2714]: W0710 00:40:18.710416 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.710422 kubelet[2714]: E0710 00:40:18.710425 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.711383 kubelet[2714]: E0710 00:40:18.710939 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.711383 kubelet[2714]: W0710 00:40:18.711097 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.711383 kubelet[2714]: E0710 00:40:18.711108 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.711893 kubelet[2714]: E0710 00:40:18.711858 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.712019 kubelet[2714]: W0710 00:40:18.711995 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.712019 kubelet[2714]: E0710 00:40:18.712015 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.712776 kubelet[2714]: E0710 00:40:18.712750 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.712776 kubelet[2714]: W0710 00:40:18.712768 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.712776 kubelet[2714]: E0710 00:40:18.712778 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.713443 kubelet[2714]: E0710 00:40:18.713420 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.713443 kubelet[2714]: W0710 00:40:18.713436 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.713519 kubelet[2714]: E0710 00:40:18.713445 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.714504 kubelet[2714]: E0710 00:40:18.714065 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.714504 kubelet[2714]: W0710 00:40:18.714077 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.714504 kubelet[2714]: E0710 00:40:18.714086 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.714758 kubelet[2714]: E0710 00:40:18.714734 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.714758 kubelet[2714]: W0710 00:40:18.714751 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.714758 kubelet[2714]: E0710 00:40:18.714760 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.715138 kubelet[2714]: E0710 00:40:18.715115 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.715138 kubelet[2714]: W0710 00:40:18.715131 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.715138 kubelet[2714]: E0710 00:40:18.715139 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.715770 kubelet[2714]: E0710 00:40:18.715749 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.715770 kubelet[2714]: W0710 00:40:18.715763 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.715770 kubelet[2714]: E0710 00:40:18.715770 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.716725 kubelet[2714]: I0710 00:40:18.715794 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw5wt\" (UniqueName: \"kubernetes.io/projected/d82186e1-710f-473a-b991-55b42ccd0785-kube-api-access-xw5wt\") pod \"csi-node-driver-nbsg4\" (UID: \"d82186e1-710f-473a-b991-55b42ccd0785\") " pod="calico-system/csi-node-driver-nbsg4" Jul 10 00:40:18.716725 kubelet[2714]: E0710 00:40:18.716540 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.716725 kubelet[2714]: W0710 00:40:18.716548 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.716725 kubelet[2714]: E0710 00:40:18.716556 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.716725 kubelet[2714]: I0710 00:40:18.716574 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d82186e1-710f-473a-b991-55b42ccd0785-varrun\") pod \"csi-node-driver-nbsg4\" (UID: \"d82186e1-710f-473a-b991-55b42ccd0785\") " pod="calico-system/csi-node-driver-nbsg4" Jul 10 00:40:18.716988 kubelet[2714]: E0710 00:40:18.716899 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.716988 kubelet[2714]: W0710 00:40:18.716912 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.716988 kubelet[2714]: E0710 00:40:18.716922 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.716988 kubelet[2714]: I0710 00:40:18.716954 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d82186e1-710f-473a-b991-55b42ccd0785-registration-dir\") pod \"csi-node-driver-nbsg4\" (UID: \"d82186e1-710f-473a-b991-55b42ccd0785\") " pod="calico-system/csi-node-driver-nbsg4" Jul 10 00:40:18.717479 kubelet[2714]: E0710 00:40:18.717371 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.717479 kubelet[2714]: W0710 00:40:18.717387 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.717479 kubelet[2714]: E0710 00:40:18.717396 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.718068 kubelet[2714]: E0710 00:40:18.718045 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.718068 kubelet[2714]: W0710 00:40:18.718060 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.718068 kubelet[2714]: E0710 00:40:18.718068 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.718827 kubelet[2714]: E0710 00:40:18.718802 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.718827 kubelet[2714]: W0710 00:40:18.718819 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.718827 kubelet[2714]: E0710 00:40:18.718828 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.719136 kubelet[2714]: E0710 00:40:18.719119 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.719136 kubelet[2714]: W0710 00:40:18.719134 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.719187 kubelet[2714]: E0710 00:40:18.719143 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.719435 kubelet[2714]: I0710 00:40:18.719409 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d82186e1-710f-473a-b991-55b42ccd0785-socket-dir\") pod \"csi-node-driver-nbsg4\" (UID: \"d82186e1-710f-473a-b991-55b42ccd0785\") " pod="calico-system/csi-node-driver-nbsg4" Jul 10 00:40:18.719899 kubelet[2714]: E0710 00:40:18.719854 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.719899 kubelet[2714]: W0710 00:40:18.719870 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.719899 kubelet[2714]: E0710 00:40:18.719879 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.720805 kubelet[2714]: E0710 00:40:18.720406 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.720805 kubelet[2714]: W0710 00:40:18.720421 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.720805 kubelet[2714]: E0710 00:40:18.720433 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.721298 kubelet[2714]: E0710 00:40:18.721288 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.721426 kubelet[2714]: W0710 00:40:18.721415 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.721525 kubelet[2714]: E0710 00:40:18.721514 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.722054 kubelet[2714]: E0710 00:40:18.722044 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.722206 kubelet[2714]: W0710 00:40:18.722196 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.722261 kubelet[2714]: E0710 00:40:18.722251 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.723037 kubelet[2714]: E0710 00:40:18.722838 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.723037 kubelet[2714]: W0710 00:40:18.722847 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.723037 kubelet[2714]: E0710 00:40:18.722856 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.723486 kubelet[2714]: E0710 00:40:18.723407 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.723486 kubelet[2714]: W0710 00:40:18.723417 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.723486 kubelet[2714]: E0710 00:40:18.723427 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.723859 kubelet[2714]: I0710 00:40:18.723664 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d82186e1-710f-473a-b991-55b42ccd0785-kubelet-dir\") pod \"csi-node-driver-nbsg4\" (UID: \"d82186e1-710f-473a-b991-55b42ccd0785\") " pod="calico-system/csi-node-driver-nbsg4" Jul 10 00:40:18.724221 kubelet[2714]: E0710 00:40:18.724195 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.724221 kubelet[2714]: W0710 00:40:18.724216 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.724276 kubelet[2714]: E0710 00:40:18.724227 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.725656 kubelet[2714]: E0710 00:40:18.725282 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.725656 kubelet[2714]: W0710 00:40:18.725294 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.725656 kubelet[2714]: E0710 00:40:18.725303 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.738118 systemd[1]: Started cri-containerd-5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f.scope - libcontainer container 5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f. Jul 10 00:40:18.781335 containerd[1580]: time="2025-07-10T00:40:18.781292159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6g65w,Uid:b3614fb4-637d-4618-951f-6a9851feef93,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f\"" Jul 10 00:40:18.783755 containerd[1580]: time="2025-07-10T00:40:18.783603289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:40:18.798085 containerd[1580]: time="2025-07-10T00:40:18.798028463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b5fdfd775-s54tz,Uid:d3dc037d-fb2c-4f3b-943c-f90b702b79f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d268335eb17997fef4aa1321ec313e7156729487c87c8b359e341bdeecfa30ca\"" Jul 10 00:40:18.798975 kubelet[2714]: E0710 00:40:18.798958 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:18.825013 kubelet[2714]: E0710 00:40:18.824972 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.826216 kubelet[2714]: W0710 00:40:18.825175 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.826216 kubelet[2714]: E0710 00:40:18.825195 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.826690 kubelet[2714]: E0710 00:40:18.826677 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.826829 kubelet[2714]: W0710 00:40:18.826797 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.826829 kubelet[2714]: E0710 00:40:18.826812 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.827283 kubelet[2714]: E0710 00:40:18.827254 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.827283 kubelet[2714]: W0710 00:40:18.827264 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.827431 kubelet[2714]: E0710 00:40:18.827356 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.827801 kubelet[2714]: E0710 00:40:18.827788 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.829246 kubelet[2714]: W0710 00:40:18.829185 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.829246 kubelet[2714]: E0710 00:40:18.829202 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.829711 kubelet[2714]: E0710 00:40:18.829673 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.829711 kubelet[2714]: W0710 00:40:18.829684 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.829711 kubelet[2714]: E0710 00:40:18.829693 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.830463 kubelet[2714]: E0710 00:40:18.830431 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.830463 kubelet[2714]: W0710 00:40:18.830442 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.830463 kubelet[2714]: E0710 00:40:18.830451 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.830846 kubelet[2714]: E0710 00:40:18.830815 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.830846 kubelet[2714]: W0710 00:40:18.830825 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.830846 kubelet[2714]: E0710 00:40:18.830835 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.831151 kubelet[2714]: E0710 00:40:18.831139 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.831232 kubelet[2714]: W0710 00:40:18.831208 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.831232 kubelet[2714]: E0710 00:40:18.831221 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.831902 kubelet[2714]: E0710 00:40:18.831878 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.832166 kubelet[2714]: W0710 00:40:18.831978 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.832166 kubelet[2714]: E0710 00:40:18.831991 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.833118 kubelet[2714]: E0710 00:40:18.832877 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.833118 kubelet[2714]: W0710 00:40:18.832888 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.833118 kubelet[2714]: E0710 00:40:18.832897 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.833825 kubelet[2714]: E0710 00:40:18.833755 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.834090 kubelet[2714]: W0710 00:40:18.834055 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.834090 kubelet[2714]: E0710 00:40:18.834071 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.834995 kubelet[2714]: E0710 00:40:18.834975 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.835829 kubelet[2714]: W0710 00:40:18.835653 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.835829 kubelet[2714]: E0710 00:40:18.835672 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.836167 kubelet[2714]: E0710 00:40:18.836156 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.836354 kubelet[2714]: W0710 00:40:18.836293 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.836354 kubelet[2714]: E0710 00:40:18.836306 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.836804 kubelet[2714]: E0710 00:40:18.836719 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.836804 kubelet[2714]: W0710 00:40:18.836781 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.836804 kubelet[2714]: E0710 00:40:18.836791 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.837813 kubelet[2714]: E0710 00:40:18.837756 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.838136 kubelet[2714]: W0710 00:40:18.837905 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.838136 kubelet[2714]: E0710 00:40:18.837920 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.839342 kubelet[2714]: E0710 00:40:18.839328 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.839467 kubelet[2714]: W0710 00:40:18.839401 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.839467 kubelet[2714]: E0710 00:40:18.839415 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.839818 kubelet[2714]: E0710 00:40:18.839787 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.839818 kubelet[2714]: W0710 00:40:18.839798 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.839818 kubelet[2714]: E0710 00:40:18.839807 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.840143 kubelet[2714]: E0710 00:40:18.840111 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.840143 kubelet[2714]: W0710 00:40:18.840123 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.840143 kubelet[2714]: E0710 00:40:18.840131 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.840436 kubelet[2714]: E0710 00:40:18.840407 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.840436 kubelet[2714]: W0710 00:40:18.840417 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.840436 kubelet[2714]: E0710 00:40:18.840425 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.840957 kubelet[2714]: E0710 00:40:18.840926 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.840957 kubelet[2714]: W0710 00:40:18.840936 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.840957 kubelet[2714]: E0710 00:40:18.840945 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.841470 kubelet[2714]: E0710 00:40:18.841439 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.841470 kubelet[2714]: W0710 00:40:18.841450 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.841470 kubelet[2714]: E0710 00:40:18.841458 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.842119 kubelet[2714]: E0710 00:40:18.842096 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.842181 kubelet[2714]: W0710 00:40:18.842118 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.842181 kubelet[2714]: E0710 00:40:18.842141 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.842873 kubelet[2714]: E0710 00:40:18.842855 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.842873 kubelet[2714]: W0710 00:40:18.842870 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.842922 kubelet[2714]: E0710 00:40:18.842882 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.843408 kubelet[2714]: E0710 00:40:18.843391 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.843408 kubelet[2714]: W0710 00:40:18.843405 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.843552 kubelet[2714]: E0710 00:40:18.843527 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:18.844046 kubelet[2714]: E0710 00:40:18.844019 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.844046 kubelet[2714]: W0710 00:40:18.844034 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.844046 kubelet[2714]: E0710 00:40:18.844043 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:40:18.858344 kubelet[2714]: E0710 00:40:18.858318 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:40:18.858344 kubelet[2714]: W0710 00:40:18.858337 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:40:18.858344 kubelet[2714]: E0710 00:40:18.858347 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:40:19.454481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376397331.mount: Deactivated successfully. Jul 10 00:40:19.525688 containerd[1580]: time="2025-07-10T00:40:19.525644882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:19.526503 containerd[1580]: time="2025-07-10T00:40:19.526375845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Jul 10 00:40:19.527087 containerd[1580]: time="2025-07-10T00:40:19.527038087Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:19.528260 containerd[1580]: time="2025-07-10T00:40:19.528220133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:19.529165 containerd[1580]: time="2025-07-10T00:40:19.529141567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with 
image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 745.450168ms" Jul 10 00:40:19.529207 containerd[1580]: time="2025-07-10T00:40:19.529166467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 10 00:40:19.530697 containerd[1580]: time="2025-07-10T00:40:19.530664113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 00:40:19.532673 containerd[1580]: time="2025-07-10T00:40:19.532607811Z" level=info msg="CreateContainer within sandbox \"5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:40:19.541659 containerd[1580]: time="2025-07-10T00:40:19.538662696Z" level=info msg="Container 0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:40:19.550160 containerd[1580]: time="2025-07-10T00:40:19.550124605Z" level=info msg="CreateContainer within sandbox \"5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8\"" Jul 10 00:40:19.550737 containerd[1580]: time="2025-07-10T00:40:19.550704527Z" level=info msg="StartContainer for \"0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8\"" Jul 10 00:40:19.551828 containerd[1580]: time="2025-07-10T00:40:19.551804062Z" level=info msg="connecting to shim 0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8" 
address="unix:///run/containerd/s/b31ad66a68a1073cd5d9369ef1b1dbfb7b8c98091af276f0b7ece53fd97dd708" protocol=ttrpc version=3 Jul 10 00:40:19.576744 systemd[1]: Started cri-containerd-0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8.scope - libcontainer container 0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8. Jul 10 00:40:19.618447 containerd[1580]: time="2025-07-10T00:40:19.618372613Z" level=info msg="StartContainer for \"0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8\" returns successfully" Jul 10 00:40:19.632777 systemd[1]: cri-containerd-0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8.scope: Deactivated successfully. Jul 10 00:40:19.634535 containerd[1580]: time="2025-07-10T00:40:19.634500110Z" level=info msg="received exit event container_id:\"0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8\" id:\"0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8\" pid:3344 exited_at:{seconds:1752108019 nanos:634183819}" Jul 10 00:40:19.634697 containerd[1580]: time="2025-07-10T00:40:19.634613241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8\" id:\"0576c42d153a67752a74255a055aca68788f2ab5d70d3f896737cc5fada694b8\" pid:3344 exited_at:{seconds:1752108019 nanos:634183819}" Jul 10 00:40:20.414121 kubelet[2714]: E0710 00:40:20.414074 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbsg4" podUID="d82186e1-710f-473a-b991-55b42ccd0785" Jul 10 00:40:20.674383 containerd[1580]: time="2025-07-10T00:40:20.674131575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:20.675371 
containerd[1580]: time="2025-07-10T00:40:20.675226079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33740523" Jul 10 00:40:20.675963 containerd[1580]: time="2025-07-10T00:40:20.675931792Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:20.677402 containerd[1580]: time="2025-07-10T00:40:20.677370249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:20.677929 containerd[1580]: time="2025-07-10T00:40:20.677902130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.147100387s" Jul 10 00:40:20.678001 containerd[1580]: time="2025-07-10T00:40:20.677987971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 00:40:20.679741 containerd[1580]: time="2025-07-10T00:40:20.679584467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 00:40:20.693522 containerd[1580]: time="2025-07-10T00:40:20.693488393Z" level=info msg="CreateContainer within sandbox \"d268335eb17997fef4aa1321ec313e7156729487c87c8b359e341bdeecfa30ca\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:40:20.699800 containerd[1580]: time="2025-07-10T00:40:20.699739748Z" level=info msg="Container f6882167d34ff90bcbc3e5e57c5b93e935483537b315f7ef45b80868caa4bbe0: CDI devices from CRI 
Config.CDIDevices: []" Jul 10 00:40:20.707442 containerd[1580]: time="2025-07-10T00:40:20.707408409Z" level=info msg="CreateContainer within sandbox \"d268335eb17997fef4aa1321ec313e7156729487c87c8b359e341bdeecfa30ca\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f6882167d34ff90bcbc3e5e57c5b93e935483537b315f7ef45b80868caa4bbe0\"" Jul 10 00:40:20.709184 containerd[1580]: time="2025-07-10T00:40:20.709128085Z" level=info msg="StartContainer for \"f6882167d34ff90bcbc3e5e57c5b93e935483537b315f7ef45b80868caa4bbe0\"" Jul 10 00:40:20.710263 containerd[1580]: time="2025-07-10T00:40:20.710241780Z" level=info msg="connecting to shim f6882167d34ff90bcbc3e5e57c5b93e935483537b315f7ef45b80868caa4bbe0" address="unix:///run/containerd/s/26c398557eb5ac9eb2ead5fc5121fb5c2af5687899bc3983b4d010bad31fa59e" protocol=ttrpc version=3 Jul 10 00:40:20.734864 systemd[1]: Started cri-containerd-f6882167d34ff90bcbc3e5e57c5b93e935483537b315f7ef45b80868caa4bbe0.scope - libcontainer container f6882167d34ff90bcbc3e5e57c5b93e935483537b315f7ef45b80868caa4bbe0. 
Jul 10 00:40:20.790890 containerd[1580]: time="2025-07-10T00:40:20.790828753Z" level=info msg="StartContainer for \"f6882167d34ff90bcbc3e5e57c5b93e935483537b315f7ef45b80868caa4bbe0\" returns successfully" Jul 10 00:40:21.503408 kubelet[2714]: E0710 00:40:21.503371 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:21.514682 kubelet[2714]: I0710 00:40:21.514596 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b5fdfd775-s54tz" podStartSLOduration=1.635307675 podStartE2EDuration="3.514522719s" podCreationTimestamp="2025-07-10 00:40:18 +0000 UTC" firstStartedPulling="2025-07-10 00:40:18.79959526 +0000 UTC m=+16.471604808" lastFinishedPulling="2025-07-10 00:40:20.678810304 +0000 UTC m=+18.350819852" observedRunningTime="2025-07-10 00:40:21.514127128 +0000 UTC m=+19.186136696" watchObservedRunningTime="2025-07-10 00:40:21.514522719 +0000 UTC m=+19.186532267" Jul 10 00:40:22.414199 kubelet[2714]: E0710 00:40:22.414167 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbsg4" podUID="d82186e1-710f-473a-b991-55b42ccd0785" Jul 10 00:40:22.505844 kubelet[2714]: I0710 00:40:22.505820 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:40:22.506900 kubelet[2714]: E0710 00:40:22.506885 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:22.800435 containerd[1580]: time="2025-07-10T00:40:22.800394186Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:22.801603 containerd[1580]: time="2025-07-10T00:40:22.801422590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 10 00:40:22.802245 containerd[1580]: time="2025-07-10T00:40:22.801918802Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:22.803746 containerd[1580]: time="2025-07-10T00:40:22.803720298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:22.805447 containerd[1580]: time="2025-07-10T00:40:22.805420995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.125767916s" Jul 10 00:40:22.805496 containerd[1580]: time="2025-07-10T00:40:22.805447975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 10 00:40:22.809549 containerd[1580]: time="2025-07-10T00:40:22.809499969Z" level=info msg="CreateContainer within sandbox \"5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:40:22.815649 containerd[1580]: time="2025-07-10T00:40:22.814982400Z" level=info msg="Container c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7: CDI devices from CRI Config.CDIDevices: []" Jul 10 
00:40:22.828438 containerd[1580]: time="2025-07-10T00:40:22.828412588Z" level=info msg="CreateContainer within sandbox \"5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7\"" Jul 10 00:40:22.829754 containerd[1580]: time="2025-07-10T00:40:22.829707323Z" level=info msg="StartContainer for \"c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7\"" Jul 10 00:40:22.830908 containerd[1580]: time="2025-07-10T00:40:22.830881248Z" level=info msg="connecting to shim c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7" address="unix:///run/containerd/s/b31ad66a68a1073cd5d9369ef1b1dbfb7b8c98091af276f0b7ece53fd97dd708" protocol=ttrpc version=3 Jul 10 00:40:22.854837 systemd[1]: Started cri-containerd-c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7.scope - libcontainer container c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7. Jul 10 00:40:22.901377 containerd[1580]: time="2025-07-10T00:40:22.901340134Z" level=info msg="StartContainer for \"c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7\" returns successfully" Jul 10 00:40:23.335374 containerd[1580]: time="2025-07-10T00:40:23.335332262Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:40:23.339278 systemd[1]: cri-containerd-c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7.scope: Deactivated successfully. Jul 10 00:40:23.339772 systemd[1]: cri-containerd-c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7.scope: Consumed 469ms CPU time, 198.1M memory peak, 171.2M written to disk. 
Jul 10 00:40:23.341116 containerd[1580]: time="2025-07-10T00:40:23.341043562Z" level=info msg="received exit event container_id:\"c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7\" id:\"c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7\" pid:3444 exited_at:{seconds:1752108023 nanos:340832381}" Jul 10 00:40:23.341259 containerd[1580]: time="2025-07-10T00:40:23.341233163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7\" id:\"c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7\" pid:3444 exited_at:{seconds:1752108023 nanos:340832381}" Jul 10 00:40:23.364350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c49112532e045f835bfe27edbc93ccfed3016baae687db97f4a939cf9eec0dd7-rootfs.mount: Deactivated successfully. Jul 10 00:40:23.426823 kubelet[2714]: I0710 00:40:23.426680 2714 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:40:23.465082 systemd[1]: Created slice kubepods-besteffort-pod09adf70f_5efc_4b9b_a226_b3a115bd24f5.slice - libcontainer container kubepods-besteffort-pod09adf70f_5efc_4b9b_a226_b3a115bd24f5.slice. Jul 10 00:40:23.486826 systemd[1]: Created slice kubepods-besteffort-podc36c8077_0d0f_4e6b_8dab_82cc1c2f788f.slice - libcontainer container kubepods-besteffort-podc36c8077_0d0f_4e6b_8dab_82cc1c2f788f.slice. Jul 10 00:40:23.493762 systemd[1]: Created slice kubepods-burstable-podea45d8e8_7379_4561_8349_64b7edc81181.slice - libcontainer container kubepods-burstable-podea45d8e8_7379_4561_8349_64b7edc81181.slice. Jul 10 00:40:23.503256 systemd[1]: Created slice kubepods-burstable-podc8820429_a497_45f4_97be_845bf5122a43.slice - libcontainer container kubepods-burstable-podc8820429_a497_45f4_97be_845bf5122a43.slice. 
Jul 10 00:40:23.512081 systemd[1]: Created slice kubepods-besteffort-pod5ca031b0_0a99_47a5_a801_c80e357a48f6.slice - libcontainer container kubepods-besteffort-pod5ca031b0_0a99_47a5_a801_c80e357a48f6.slice. Jul 10 00:40:23.520576 systemd[1]: Created slice kubepods-besteffort-poda3302309_0908_4122_b93a_2aa48230a086.slice - libcontainer container kubepods-besteffort-poda3302309_0908_4122_b93a_2aa48230a086.slice. Jul 10 00:40:23.527532 containerd[1580]: time="2025-07-10T00:40:23.527424761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:40:23.530167 systemd[1]: Created slice kubepods-besteffort-poddb18eeab_67e9_4648_b07d_c6930950a832.slice - libcontainer container kubepods-besteffort-poddb18eeab_67e9_4648_b07d_c6930950a832.slice. Jul 10 00:40:23.566856 kubelet[2714]: I0710 00:40:23.566828 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phgc\" (UniqueName: \"kubernetes.io/projected/c8820429-a497-45f4-97be-845bf5122a43-kube-api-access-4phgc\") pod \"coredns-674b8bbfcf-fc97t\" (UID: \"c8820429-a497-45f4-97be-845bf5122a43\") " pod="kube-system/coredns-674b8bbfcf-fc97t" Jul 10 00:40:23.567166 kubelet[2714]: I0710 00:40:23.566859 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/db18eeab-67e9-4648-b07d-c6930950a832-goldmane-key-pair\") pod \"goldmane-768f4c5c69-fjcsh\" (UID: \"db18eeab-67e9-4648-b07d-c6930950a832\") " pod="calico-system/goldmane-768f4c5c69-fjcsh" Jul 10 00:40:23.567166 kubelet[2714]: I0710 00:40:23.566896 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a3302309-0908-4122-b93a-2aa48230a086-calico-apiserver-certs\") pod \"calico-apiserver-84597f7d9f-264zs\" (UID: \"a3302309-0908-4122-b93a-2aa48230a086\") " 
pod="calico-apiserver/calico-apiserver-84597f7d9f-264zs" Jul 10 00:40:23.567166 kubelet[2714]: I0710 00:40:23.566913 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b652\" (UniqueName: \"kubernetes.io/projected/c36c8077-0d0f-4e6b-8dab-82cc1c2f788f-kube-api-access-5b652\") pod \"calico-apiserver-84597f7d9f-fzrmq\" (UID: \"c36c8077-0d0f-4e6b-8dab-82cc1c2f788f\") " pod="calico-apiserver/calico-apiserver-84597f7d9f-fzrmq" Jul 10 00:40:23.567166 kubelet[2714]: I0710 00:40:23.566927 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db18eeab-67e9-4648-b07d-c6930950a832-config\") pod \"goldmane-768f4c5c69-fjcsh\" (UID: \"db18eeab-67e9-4648-b07d-c6930950a832\") " pod="calico-system/goldmane-768f4c5c69-fjcsh" Jul 10 00:40:23.567166 kubelet[2714]: I0710 00:40:23.566945 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca031b0-0a99-47a5-a801-c80e357a48f6-whisker-ca-bundle\") pod \"whisker-6d97b46c89-rz8gv\" (UID: \"5ca031b0-0a99-47a5-a801-c80e357a48f6\") " pod="calico-system/whisker-6d97b46c89-rz8gv" Jul 10 00:40:23.567279 kubelet[2714]: I0710 00:40:23.566960 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8820429-a497-45f4-97be-845bf5122a43-config-volume\") pod \"coredns-674b8bbfcf-fc97t\" (UID: \"c8820429-a497-45f4-97be-845bf5122a43\") " pod="kube-system/coredns-674b8bbfcf-fc97t" Jul 10 00:40:23.567279 kubelet[2714]: I0710 00:40:23.566973 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09adf70f-5efc-4b9b-a226-b3a115bd24f5-tigera-ca-bundle\") pod 
\"calico-kube-controllers-6647c9884-zpnlk\" (UID: \"09adf70f-5efc-4b9b-a226-b3a115bd24f5\") " pod="calico-system/calico-kube-controllers-6647c9884-zpnlk" Jul 10 00:40:23.567279 kubelet[2714]: I0710 00:40:23.566987 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c36c8077-0d0f-4e6b-8dab-82cc1c2f788f-calico-apiserver-certs\") pod \"calico-apiserver-84597f7d9f-fzrmq\" (UID: \"c36c8077-0d0f-4e6b-8dab-82cc1c2f788f\") " pod="calico-apiserver/calico-apiserver-84597f7d9f-fzrmq" Jul 10 00:40:23.567279 kubelet[2714]: I0710 00:40:23.566999 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ca031b0-0a99-47a5-a801-c80e357a48f6-whisker-backend-key-pair\") pod \"whisker-6d97b46c89-rz8gv\" (UID: \"5ca031b0-0a99-47a5-a801-c80e357a48f6\") " pod="calico-system/whisker-6d97b46c89-rz8gv" Jul 10 00:40:23.567279 kubelet[2714]: I0710 00:40:23.567014 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7p4z\" (UniqueName: \"kubernetes.io/projected/09adf70f-5efc-4b9b-a226-b3a115bd24f5-kube-api-access-b7p4z\") pod \"calico-kube-controllers-6647c9884-zpnlk\" (UID: \"09adf70f-5efc-4b9b-a226-b3a115bd24f5\") " pod="calico-system/calico-kube-controllers-6647c9884-zpnlk" Jul 10 00:40:23.567384 kubelet[2714]: I0710 00:40:23.567030 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db18eeab-67e9-4648-b07d-c6930950a832-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-fjcsh\" (UID: \"db18eeab-67e9-4648-b07d-c6930950a832\") " pod="calico-system/goldmane-768f4c5c69-fjcsh" Jul 10 00:40:23.567384 kubelet[2714]: I0710 00:40:23.567052 2714 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2trmx\" (UniqueName: \"kubernetes.io/projected/5ca031b0-0a99-47a5-a801-c80e357a48f6-kube-api-access-2trmx\") pod \"whisker-6d97b46c89-rz8gv\" (UID: \"5ca031b0-0a99-47a5-a801-c80e357a48f6\") " pod="calico-system/whisker-6d97b46c89-rz8gv" Jul 10 00:40:23.567384 kubelet[2714]: I0710 00:40:23.567067 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea45d8e8-7379-4561-8349-64b7edc81181-config-volume\") pod \"coredns-674b8bbfcf-r72x5\" (UID: \"ea45d8e8-7379-4561-8349-64b7edc81181\") " pod="kube-system/coredns-674b8bbfcf-r72x5" Jul 10 00:40:23.567384 kubelet[2714]: I0710 00:40:23.567079 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdf6j\" (UniqueName: \"kubernetes.io/projected/db18eeab-67e9-4648-b07d-c6930950a832-kube-api-access-fdf6j\") pod \"goldmane-768f4c5c69-fjcsh\" (UID: \"db18eeab-67e9-4648-b07d-c6930950a832\") " pod="calico-system/goldmane-768f4c5c69-fjcsh" Jul 10 00:40:23.567384 kubelet[2714]: I0710 00:40:23.567103 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f6d7\" (UniqueName: \"kubernetes.io/projected/ea45d8e8-7379-4561-8349-64b7edc81181-kube-api-access-7f6d7\") pod \"coredns-674b8bbfcf-r72x5\" (UID: \"ea45d8e8-7379-4561-8349-64b7edc81181\") " pod="kube-system/coredns-674b8bbfcf-r72x5" Jul 10 00:40:23.567529 kubelet[2714]: I0710 00:40:23.567117 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zfm6\" (UniqueName: \"kubernetes.io/projected/a3302309-0908-4122-b93a-2aa48230a086-kube-api-access-6zfm6\") pod \"calico-apiserver-84597f7d9f-264zs\" (UID: \"a3302309-0908-4122-b93a-2aa48230a086\") " pod="calico-apiserver/calico-apiserver-84597f7d9f-264zs" Jul 10 
00:40:23.779093 containerd[1580]: time="2025-07-10T00:40:23.779035996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6647c9884-zpnlk,Uid:09adf70f-5efc-4b9b-a226-b3a115bd24f5,Namespace:calico-system,Attempt:0,}" Jul 10 00:40:23.790479 containerd[1580]: time="2025-07-10T00:40:23.790305656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84597f7d9f-fzrmq,Uid:c36c8077-0d0f-4e6b-8dab-82cc1c2f788f,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:40:23.798907 kubelet[2714]: E0710 00:40:23.798850 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:23.799739 containerd[1580]: time="2025-07-10T00:40:23.799403438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r72x5,Uid:ea45d8e8-7379-4561-8349-64b7edc81181,Namespace:kube-system,Attempt:0,}" Jul 10 00:40:23.807325 kubelet[2714]: E0710 00:40:23.807310 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:23.809857 containerd[1580]: time="2025-07-10T00:40:23.809820584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc97t,Uid:c8820429-a497-45f4-97be-845bf5122a43,Namespace:kube-system,Attempt:0,}" Jul 10 00:40:23.820007 containerd[1580]: time="2025-07-10T00:40:23.819298247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d97b46c89-rz8gv,Uid:5ca031b0-0a99-47a5-a801-c80e357a48f6,Namespace:calico-system,Attempt:0,}" Jul 10 00:40:23.828967 containerd[1580]: time="2025-07-10T00:40:23.828728710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84597f7d9f-264zs,Uid:a3302309-0908-4122-b93a-2aa48230a086,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:40:23.837343 
containerd[1580]: time="2025-07-10T00:40:23.837278530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fjcsh,Uid:db18eeab-67e9-4648-b07d-c6930950a832,Namespace:calico-system,Attempt:0,}" Jul 10 00:40:23.975379 containerd[1580]: time="2025-07-10T00:40:23.975326501Z" level=error msg="Failed to destroy network for sandbox \"5c7780131f80fd687bdb6092ab4f74fef2feee18844df4f4751d0f7ffef3ebe7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:23.979376 systemd[1]: run-netns-cni\x2d99df8497\x2d49b7\x2defa9\x2d4a9d\x2deaf9f85361f3.mount: Deactivated successfully. Jul 10 00:40:23.983894 containerd[1580]: time="2025-07-10T00:40:23.983857550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6647c9884-zpnlk,Uid:09adf70f-5efc-4b9b-a226-b3a115bd24f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c7780131f80fd687bdb6092ab4f74fef2feee18844df4f4751d0f7ffef3ebe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:23.984194 kubelet[2714]: E0710 00:40:23.984095 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c7780131f80fd687bdb6092ab4f74fef2feee18844df4f4751d0f7ffef3ebe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:23.984194 kubelet[2714]: E0710 00:40:23.984186 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5c7780131f80fd687bdb6092ab4f74fef2feee18844df4f4751d0f7ffef3ebe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6647c9884-zpnlk" Jul 10 00:40:23.984550 kubelet[2714]: E0710 00:40:23.984207 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c7780131f80fd687bdb6092ab4f74fef2feee18844df4f4751d0f7ffef3ebe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6647c9884-zpnlk" Jul 10 00:40:23.984550 kubelet[2714]: E0710 00:40:23.984312 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6647c9884-zpnlk_calico-system(09adf70f-5efc-4b9b-a226-b3a115bd24f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6647c9884-zpnlk_calico-system(09adf70f-5efc-4b9b-a226-b3a115bd24f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c7780131f80fd687bdb6092ab4f74fef2feee18844df4f4751d0f7ffef3ebe7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6647c9884-zpnlk" podUID="09adf70f-5efc-4b9b-a226-b3a115bd24f5" Jul 10 00:40:24.030151 containerd[1580]: time="2025-07-10T00:40:24.029693425Z" level=error msg="Failed to destroy network for sandbox \"7ca1e0e866401d4699061e2c76634972a01b5421b54b2307d882c8809397ae8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.032910 containerd[1580]: time="2025-07-10T00:40:24.032882606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r72x5,Uid:ea45d8e8-7379-4561-8349-64b7edc81181,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca1e0e866401d4699061e2c76634972a01b5421b54b2307d882c8809397ae8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.033404 kubelet[2714]: E0710 00:40:24.033313 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca1e0e866401d4699061e2c76634972a01b5421b54b2307d882c8809397ae8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.034011 kubelet[2714]: E0710 00:40:24.033985 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca1e0e866401d4699061e2c76634972a01b5421b54b2307d882c8809397ae8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r72x5" Jul 10 00:40:24.034074 kubelet[2714]: E0710 00:40:24.034014 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca1e0e866401d4699061e2c76634972a01b5421b54b2307d882c8809397ae8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r72x5" Jul 10 00:40:24.034074 kubelet[2714]: E0710 00:40:24.034056 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r72x5_kube-system(ea45d8e8-7379-4561-8349-64b7edc81181)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r72x5_kube-system(ea45d8e8-7379-4561-8349-64b7edc81181)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ca1e0e866401d4699061e2c76634972a01b5421b54b2307d882c8809397ae8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r72x5" podUID="ea45d8e8-7379-4561-8349-64b7edc81181" Jul 10 00:40:24.037653 containerd[1580]: time="2025-07-10T00:40:24.037605621Z" level=error msg="Failed to destroy network for sandbox \"e014896a1fa9b7c9a68275dbba70de6f90a45f934da21dabbf09411910e26546\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.043151 containerd[1580]: time="2025-07-10T00:40:24.043115139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fjcsh,Uid:db18eeab-67e9-4648-b07d-c6930950a832,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e014896a1fa9b7c9a68275dbba70de6f90a45f934da21dabbf09411910e26546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.043273 kubelet[2714]: E0710 00:40:24.043246 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"e014896a1fa9b7c9a68275dbba70de6f90a45f934da21dabbf09411910e26546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.043306 kubelet[2714]: E0710 00:40:24.043277 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e014896a1fa9b7c9a68275dbba70de6f90a45f934da21dabbf09411910e26546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fjcsh" Jul 10 00:40:24.043306 kubelet[2714]: E0710 00:40:24.043293 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e014896a1fa9b7c9a68275dbba70de6f90a45f934da21dabbf09411910e26546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fjcsh" Jul 10 00:40:24.043368 kubelet[2714]: E0710 00:40:24.043319 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-fjcsh_calico-system(db18eeab-67e9-4648-b07d-c6930950a832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-fjcsh_calico-system(db18eeab-67e9-4648-b07d-c6930950a832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e014896a1fa9b7c9a68275dbba70de6f90a45f934da21dabbf09411910e26546\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-768f4c5c69-fjcsh" podUID="db18eeab-67e9-4648-b07d-c6930950a832" Jul 10 00:40:24.048506 containerd[1580]: time="2025-07-10T00:40:24.048415368Z" level=error msg="Failed to destroy network for sandbox \"128723684b5c958fc77136afd9d51b87420a71952c74e745ce9ee568f732a2e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.049382 containerd[1580]: time="2025-07-10T00:40:24.049329521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84597f7d9f-fzrmq,Uid:c36c8077-0d0f-4e6b-8dab-82cc1c2f788f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"128723684b5c958fc77136afd9d51b87420a71952c74e745ce9ee568f732a2e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.049792 kubelet[2714]: E0710 00:40:24.049759 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128723684b5c958fc77136afd9d51b87420a71952c74e745ce9ee568f732a2e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.050016 kubelet[2714]: E0710 00:40:24.049793 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128723684b5c958fc77136afd9d51b87420a71952c74e745ce9ee568f732a2e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-84597f7d9f-fzrmq" Jul 10 00:40:24.050016 kubelet[2714]: E0710 00:40:24.049833 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128723684b5c958fc77136afd9d51b87420a71952c74e745ce9ee568f732a2e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84597f7d9f-fzrmq" Jul 10 00:40:24.050016 kubelet[2714]: E0710 00:40:24.049865 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84597f7d9f-fzrmq_calico-apiserver(c36c8077-0d0f-4e6b-8dab-82cc1c2f788f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84597f7d9f-fzrmq_calico-apiserver(c36c8077-0d0f-4e6b-8dab-82cc1c2f788f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"128723684b5c958fc77136afd9d51b87420a71952c74e745ce9ee568f732a2e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84597f7d9f-fzrmq" podUID="c36c8077-0d0f-4e6b-8dab-82cc1c2f788f" Jul 10 00:40:24.086844 containerd[1580]: time="2025-07-10T00:40:24.086815195Z" level=error msg="Failed to destroy network for sandbox \"4ff75a45de57b775132959fb5bbc41485aac24c10eb79d147f1a2165e73caa46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.086947 containerd[1580]: time="2025-07-10T00:40:24.086926395Z" level=error msg="Failed to destroy network for sandbox \"eb39fdef2be6469a57c1c8909141fda771e14f630b303672bb2a462d8aae4403\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.088829 containerd[1580]: time="2025-07-10T00:40:24.088794262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84597f7d9f-264zs,Uid:a3302309-0908-4122-b93a-2aa48230a086,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb39fdef2be6469a57c1c8909141fda771e14f630b303672bb2a462d8aae4403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.089378 kubelet[2714]: E0710 00:40:24.089323 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb39fdef2be6469a57c1c8909141fda771e14f630b303672bb2a462d8aae4403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.089560 containerd[1580]: time="2025-07-10T00:40:24.089456114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc97t,Uid:c8820429-a497-45f4-97be-845bf5122a43,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff75a45de57b775132959fb5bbc41485aac24c10eb79d147f1a2165e73caa46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.089661 kubelet[2714]: E0710 00:40:24.089527 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"eb39fdef2be6469a57c1c8909141fda771e14f630b303672bb2a462d8aae4403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84597f7d9f-264zs" Jul 10 00:40:24.089807 kubelet[2714]: E0710 00:40:24.089722 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb39fdef2be6469a57c1c8909141fda771e14f630b303672bb2a462d8aae4403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84597f7d9f-264zs" Jul 10 00:40:24.090069 kubelet[2714]: E0710 00:40:24.089915 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84597f7d9f-264zs_calico-apiserver(a3302309-0908-4122-b93a-2aa48230a086)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84597f7d9f-264zs_calico-apiserver(a3302309-0908-4122-b93a-2aa48230a086)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb39fdef2be6469a57c1c8909141fda771e14f630b303672bb2a462d8aae4403\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84597f7d9f-264zs" podUID="a3302309-0908-4122-b93a-2aa48230a086" Jul 10 00:40:24.090231 kubelet[2714]: E0710 00:40:24.090114 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff75a45de57b775132959fb5bbc41485aac24c10eb79d147f1a2165e73caa46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.090231 kubelet[2714]: E0710 00:40:24.090150 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff75a45de57b775132959fb5bbc41485aac24c10eb79d147f1a2165e73caa46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fc97t" Jul 10 00:40:24.090231 kubelet[2714]: E0710 00:40:24.090168 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff75a45de57b775132959fb5bbc41485aac24c10eb79d147f1a2165e73caa46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fc97t" Jul 10 00:40:24.090309 kubelet[2714]: E0710 00:40:24.090198 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fc97t_kube-system(c8820429-a497-45f4-97be-845bf5122a43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fc97t_kube-system(c8820429-a497-45f4-97be-845bf5122a43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ff75a45de57b775132959fb5bbc41485aac24c10eb79d147f1a2165e73caa46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fc97t" podUID="c8820429-a497-45f4-97be-845bf5122a43" Jul 10 00:40:24.090350 containerd[1580]: time="2025-07-10T00:40:24.090305077Z" level=error msg="Failed to destroy network for 
sandbox \"4798f75ec23e98079dd8fa54206758d07c709c14688df111576e519a8aae3682\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.091163 containerd[1580]: time="2025-07-10T00:40:24.091109649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d97b46c89-rz8gv,Uid:5ca031b0-0a99-47a5-a801-c80e357a48f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4798f75ec23e98079dd8fa54206758d07c709c14688df111576e519a8aae3682\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.091499 kubelet[2714]: E0710 00:40:24.091381 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4798f75ec23e98079dd8fa54206758d07c709c14688df111576e519a8aae3682\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.091499 kubelet[2714]: E0710 00:40:24.091453 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4798f75ec23e98079dd8fa54206758d07c709c14688df111576e519a8aae3682\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d97b46c89-rz8gv" Jul 10 00:40:24.091499 kubelet[2714]: E0710 00:40:24.091469 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4798f75ec23e98079dd8fa54206758d07c709c14688df111576e519a8aae3682\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d97b46c89-rz8gv" Jul 10 00:40:24.091701 kubelet[2714]: E0710 00:40:24.091665 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d97b46c89-rz8gv_calico-system(5ca031b0-0a99-47a5-a801-c80e357a48f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d97b46c89-rz8gv_calico-system(5ca031b0-0a99-47a5-a801-c80e357a48f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4798f75ec23e98079dd8fa54206758d07c709c14688df111576e519a8aae3682\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d97b46c89-rz8gv" podUID="5ca031b0-0a99-47a5-a801-c80e357a48f6" Jul 10 00:40:24.420304 systemd[1]: Created slice kubepods-besteffort-podd82186e1_710f_473a_b991_55b42ccd0785.slice - libcontainer container kubepods-besteffort-podd82186e1_710f_473a_b991_55b42ccd0785.slice. 
Jul 10 00:40:24.425328 containerd[1580]: time="2025-07-10T00:40:24.425236142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbsg4,Uid:d82186e1-710f-473a-b991-55b42ccd0785,Namespace:calico-system,Attempt:0,}" Jul 10 00:40:24.484379 containerd[1580]: time="2025-07-10T00:40:24.484292208Z" level=error msg="Failed to destroy network for sandbox \"3160d0714540b42807915ed38fb767eb79f5a3b94e354c38eddaedc3c87e1a8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.485310 containerd[1580]: time="2025-07-10T00:40:24.485279001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbsg4,Uid:d82186e1-710f-473a-b991-55b42ccd0785,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3160d0714540b42807915ed38fb767eb79f5a3b94e354c38eddaedc3c87e1a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.485552 kubelet[2714]: E0710 00:40:24.485514 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3160d0714540b42807915ed38fb767eb79f5a3b94e354c38eddaedc3c87e1a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:40:24.485552 kubelet[2714]: E0710 00:40:24.485557 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3160d0714540b42807915ed38fb767eb79f5a3b94e354c38eddaedc3c87e1a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbsg4" Jul 10 00:40:24.485778 kubelet[2714]: E0710 00:40:24.485594 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3160d0714540b42807915ed38fb767eb79f5a3b94e354c38eddaedc3c87e1a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbsg4" Jul 10 00:40:24.485778 kubelet[2714]: E0710 00:40:24.485663 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nbsg4_calico-system(d82186e1-710f-473a-b991-55b42ccd0785)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nbsg4_calico-system(d82186e1-710f-473a-b991-55b42ccd0785)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3160d0714540b42807915ed38fb767eb79f5a3b94e354c38eddaedc3c87e1a8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nbsg4" podUID="d82186e1-710f-473a-b991-55b42ccd0785" Jul 10 00:40:24.817079 systemd[1]: run-netns-cni\x2dded79c50\x2d1de4\x2d53ad\x2d443a\x2d095253166e48.mount: Deactivated successfully. Jul 10 00:40:24.817197 systemd[1]: run-netns-cni\x2dc2f53715\x2d6e3a\x2d4110\x2d04a3\x2db7f06a22f32b.mount: Deactivated successfully. Jul 10 00:40:24.817262 systemd[1]: run-netns-cni\x2dee0bedf4\x2d6afa\x2d4b01\x2d72ff\x2d5a5d4839d65d.mount: Deactivated successfully. Jul 10 00:40:24.817325 systemd[1]: run-netns-cni\x2dccf1b398\x2d1641\x2d1012\x2d0d5d\x2d5be6d912893c.mount: Deactivated successfully. 
Jul 10 00:40:24.817390 systemd[1]: run-netns-cni\x2d877af90e\x2db363\x2d7945\x2d519b\x2de86204f8f6fc.mount: Deactivated successfully. Jul 10 00:40:24.817457 systemd[1]: run-netns-cni\x2d82b74d69\x2d4ace\x2d432e\x2df297\x2df104130405db.mount: Deactivated successfully. Jul 10 00:40:27.708505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294345314.mount: Deactivated successfully. Jul 10 00:40:27.734872 containerd[1580]: time="2025-07-10T00:40:27.734819593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:27.735645 containerd[1580]: time="2025-07-10T00:40:27.735394575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 10 00:40:27.736118 containerd[1580]: time="2025-07-10T00:40:27.736089258Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:27.737306 containerd[1580]: time="2025-07-10T00:40:27.737286801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:27.737875 containerd[1580]: time="2025-07-10T00:40:27.737841502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.21009353s" Jul 10 00:40:27.737919 containerd[1580]: time="2025-07-10T00:40:27.737874242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference 
\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 00:40:27.756141 containerd[1580]: time="2025-07-10T00:40:27.756099926Z" level=info msg="CreateContainer within sandbox \"5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:40:27.764869 containerd[1580]: time="2025-07-10T00:40:27.764833131Z" level=info msg="Container 9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:40:27.770774 containerd[1580]: time="2025-07-10T00:40:27.770738958Z" level=info msg="CreateContainer within sandbox \"5d9550b76c846a6073cd6b17e66556d0e3d039a9f763acdccec25b03fe80fe0f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\"" Jul 10 00:40:27.771306 containerd[1580]: time="2025-07-10T00:40:27.771276350Z" level=info msg="StartContainer for \"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\"" Jul 10 00:40:27.773358 containerd[1580]: time="2025-07-10T00:40:27.773330786Z" level=info msg="connecting to shim 9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357" address="unix:///run/containerd/s/b31ad66a68a1073cd5d9369ef1b1dbfb7b8c98091af276f0b7ece53fd97dd708" protocol=ttrpc version=3 Jul 10 00:40:27.817171 systemd[1]: Started cri-containerd-9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357.scope - libcontainer container 9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357. Jul 10 00:40:27.860782 containerd[1580]: time="2025-07-10T00:40:27.860749701Z" level=info msg="StartContainer for \"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" returns successfully" Jul 10 00:40:27.931093 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:40:27.931197 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jul 10 00:40:28.095657 kubelet[2714]: I0710 00:40:28.095220 2714 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2trmx\" (UniqueName: \"kubernetes.io/projected/5ca031b0-0a99-47a5-a801-c80e357a48f6-kube-api-access-2trmx\") pod \"5ca031b0-0a99-47a5-a801-c80e357a48f6\" (UID: \"5ca031b0-0a99-47a5-a801-c80e357a48f6\") " Jul 10 00:40:28.095657 kubelet[2714]: I0710 00:40:28.095267 2714 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ca031b0-0a99-47a5-a801-c80e357a48f6-whisker-backend-key-pair\") pod \"5ca031b0-0a99-47a5-a801-c80e357a48f6\" (UID: \"5ca031b0-0a99-47a5-a801-c80e357a48f6\") " Jul 10 00:40:28.095657 kubelet[2714]: I0710 00:40:28.095291 2714 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca031b0-0a99-47a5-a801-c80e357a48f6-whisker-ca-bundle\") pod \"5ca031b0-0a99-47a5-a801-c80e357a48f6\" (UID: \"5ca031b0-0a99-47a5-a801-c80e357a48f6\") " Jul 10 00:40:28.096677 kubelet[2714]: I0710 00:40:28.096515 2714 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca031b0-0a99-47a5-a801-c80e357a48f6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5ca031b0-0a99-47a5-a801-c80e357a48f6" (UID: "5ca031b0-0a99-47a5-a801-c80e357a48f6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:40:28.110798 kubelet[2714]: I0710 00:40:28.110750 2714 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca031b0-0a99-47a5-a801-c80e357a48f6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5ca031b0-0a99-47a5-a801-c80e357a48f6" (UID: "5ca031b0-0a99-47a5-a801-c80e357a48f6"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:40:28.110910 kubelet[2714]: I0710 00:40:28.110895 2714 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca031b0-0a99-47a5-a801-c80e357a48f6-kube-api-access-2trmx" (OuterVolumeSpecName: "kube-api-access-2trmx") pod "5ca031b0-0a99-47a5-a801-c80e357a48f6" (UID: "5ca031b0-0a99-47a5-a801-c80e357a48f6"). InnerVolumeSpecName "kube-api-access-2trmx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:40:28.195962 kubelet[2714]: I0710 00:40:28.195910 2714 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ca031b0-0a99-47a5-a801-c80e357a48f6-whisker-backend-key-pair\") on node \"172-238-161-214\" DevicePath \"\"" Jul 10 00:40:28.195962 kubelet[2714]: I0710 00:40:28.195935 2714 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca031b0-0a99-47a5-a801-c80e357a48f6-whisker-ca-bundle\") on node \"172-238-161-214\" DevicePath \"\"" Jul 10 00:40:28.195962 kubelet[2714]: I0710 00:40:28.195943 2714 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2trmx\" (UniqueName: \"kubernetes.io/projected/5ca031b0-0a99-47a5-a801-c80e357a48f6-kube-api-access-2trmx\") on node \"172-238-161-214\" DevicePath \"\"" Jul 10 00:40:28.420710 systemd[1]: Removed slice kubepods-besteffort-pod5ca031b0_0a99_47a5_a801_c80e357a48f6.slice - libcontainer container kubepods-besteffort-pod5ca031b0_0a99_47a5_a801_c80e357a48f6.slice. 
Jul 10 00:40:28.556230 kubelet[2714]: I0710 00:40:28.555931 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6g65w" podStartSLOduration=1.600193626 podStartE2EDuration="10.555917273s" podCreationTimestamp="2025-07-10 00:40:18 +0000 UTC" firstStartedPulling="2025-07-10 00:40:18.783238818 +0000 UTC m=+16.455248366" lastFinishedPulling="2025-07-10 00:40:27.738962465 +0000 UTC m=+25.410972013" observedRunningTime="2025-07-10 00:40:28.552857204 +0000 UTC m=+26.224866752" watchObservedRunningTime="2025-07-10 00:40:28.555917273 +0000 UTC m=+26.227926821" Jul 10 00:40:28.611667 systemd[1]: Created slice kubepods-besteffort-pod23a003b1_c3dd_4dba_80ed_9c3f5fd13502.slice - libcontainer container kubepods-besteffort-pod23a003b1_c3dd_4dba_80ed_9c3f5fd13502.slice. Jul 10 00:40:28.700512 kubelet[2714]: I0710 00:40:28.700328 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/23a003b1-c3dd-4dba-80ed-9c3f5fd13502-whisker-backend-key-pair\") pod \"whisker-56cf869bbc-58fn4\" (UID: \"23a003b1-c3dd-4dba-80ed-9c3f5fd13502\") " pod="calico-system/whisker-56cf869bbc-58fn4" Jul 10 00:40:28.700512 kubelet[2714]: I0710 00:40:28.700371 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23a003b1-c3dd-4dba-80ed-9c3f5fd13502-whisker-ca-bundle\") pod \"whisker-56cf869bbc-58fn4\" (UID: \"23a003b1-c3dd-4dba-80ed-9c3f5fd13502\") " pod="calico-system/whisker-56cf869bbc-58fn4" Jul 10 00:40:28.700512 kubelet[2714]: I0710 00:40:28.700387 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz9mv\" (UniqueName: \"kubernetes.io/projected/23a003b1-c3dd-4dba-80ed-9c3f5fd13502-kube-api-access-hz9mv\") pod \"whisker-56cf869bbc-58fn4\" (UID: 
\"23a003b1-c3dd-4dba-80ed-9c3f5fd13502\") " pod="calico-system/whisker-56cf869bbc-58fn4" Jul 10 00:40:28.708691 systemd[1]: var-lib-kubelet-pods-5ca031b0\x2d0a99\x2d47a5\x2da801\x2dc80e357a48f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2trmx.mount: Deactivated successfully. Jul 10 00:40:28.708795 systemd[1]: var-lib-kubelet-pods-5ca031b0\x2d0a99\x2d47a5\x2da801\x2dc80e357a48f6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:40:28.915522 containerd[1580]: time="2025-07-10T00:40:28.915491529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56cf869bbc-58fn4,Uid:23a003b1-c3dd-4dba-80ed-9c3f5fd13502,Namespace:calico-system,Attempt:0,}" Jul 10 00:40:29.033488 systemd-networkd[1458]: cali89af13077f6: Link UP Jul 10 00:40:29.035758 systemd-networkd[1458]: cali89af13077f6: Gained carrier Jul 10 00:40:29.053088 containerd[1580]: 2025-07-10 00:40:28.939 [INFO][3778] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:40:29.053088 containerd[1580]: 2025-07-10 00:40:28.972 [INFO][3778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0 whisker-56cf869bbc- calico-system 23a003b1-c3dd-4dba-80ed-9c3f5fd13502 872 0 2025-07-10 00:40:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56cf869bbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-238-161-214 whisker-56cf869bbc-58fn4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali89af13077f6 [] [] }} ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Namespace="calico-system" Pod="whisker-56cf869bbc-58fn4" WorkloadEndpoint="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-" Jul 10 00:40:29.053088 containerd[1580]: 2025-07-10 00:40:28.972 
[INFO][3778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Namespace="calico-system" Pod="whisker-56cf869bbc-58fn4" WorkloadEndpoint="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" Jul 10 00:40:29.053088 containerd[1580]: 2025-07-10 00:40:28.994 [INFO][3788] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" HandleID="k8s-pod-network.e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Workload="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:28.994 [INFO][3788] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" HandleID="k8s-pod-network.e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Workload="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-214", "pod":"whisker-56cf869bbc-58fn4", "timestamp":"2025-07-10 00:40:28.994224719 +0000 UTC"}, Hostname:"172-238-161-214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:28.994 [INFO][3788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:28.994 [INFO][3788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:28.994 [INFO][3788] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-214' Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:29.000 [INFO][3788] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" host="172-238-161-214" Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:29.004 [INFO][3788] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-161-214" Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:29.008 [INFO][3788] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:29.009 [INFO][3788] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:29.011 [INFO][3788] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:29.053260 containerd[1580]: 2025-07-10 00:40:29.011 [INFO][3788] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" host="172-238-161-214" Jul 10 00:40:29.053462 containerd[1580]: 2025-07-10 00:40:29.012 [INFO][3788] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4 Jul 10 00:40:29.053462 containerd[1580]: 2025-07-10 00:40:29.015 [INFO][3788] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" host="172-238-161-214" Jul 10 00:40:29.053462 containerd[1580]: 2025-07-10 00:40:29.018 [INFO][3788] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.1/26] block=192.168.16.0/26 
handle="k8s-pod-network.e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" host="172-238-161-214" Jul 10 00:40:29.053462 containerd[1580]: 2025-07-10 00:40:29.018 [INFO][3788] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.1/26] handle="k8s-pod-network.e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" host="172-238-161-214" Jul 10 00:40:29.053462 containerd[1580]: 2025-07-10 00:40:29.018 [INFO][3788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:40:29.053462 containerd[1580]: 2025-07-10 00:40:29.018 [INFO][3788] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.1/26] IPv6=[] ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" HandleID="k8s-pod-network.e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Workload="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" Jul 10 00:40:29.053571 containerd[1580]: 2025-07-10 00:40:29.022 [INFO][3778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Namespace="calico-system" Pod="whisker-56cf869bbc-58fn4" WorkloadEndpoint="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0", GenerateName:"whisker-56cf869bbc-", Namespace:"calico-system", SelfLink:"", UID:"23a003b1-c3dd-4dba-80ed-9c3f5fd13502", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56cf869bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"", Pod:"whisker-56cf869bbc-58fn4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali89af13077f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:29.053571 containerd[1580]: 2025-07-10 00:40:29.022 [INFO][3778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.1/32] ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Namespace="calico-system" Pod="whisker-56cf869bbc-58fn4" WorkloadEndpoint="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" Jul 10 00:40:29.053662 containerd[1580]: 2025-07-10 00:40:29.022 [INFO][3778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89af13077f6 ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Namespace="calico-system" Pod="whisker-56cf869bbc-58fn4" WorkloadEndpoint="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" Jul 10 00:40:29.053662 containerd[1580]: 2025-07-10 00:40:29.033 [INFO][3778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Namespace="calico-system" Pod="whisker-56cf869bbc-58fn4" WorkloadEndpoint="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" Jul 10 00:40:29.053707 containerd[1580]: 2025-07-10 00:40:29.033 [INFO][3778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Namespace="calico-system" 
Pod="whisker-56cf869bbc-58fn4" WorkloadEndpoint="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0", GenerateName:"whisker-56cf869bbc-", Namespace:"calico-system", SelfLink:"", UID:"23a003b1-c3dd-4dba-80ed-9c3f5fd13502", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56cf869bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4", Pod:"whisker-56cf869bbc-58fn4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali89af13077f6", MAC:"36:b0:0c:d0:f2:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:29.053752 containerd[1580]: 2025-07-10 00:40:29.044 [INFO][3778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" Namespace="calico-system" Pod="whisker-56cf869bbc-58fn4" WorkloadEndpoint="172--238--161--214-k8s-whisker--56cf869bbc--58fn4-eth0" Jul 10 00:40:29.086051 containerd[1580]: 
time="2025-07-10T00:40:29.085987846Z" level=info msg="connecting to shim e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4" address="unix:///run/containerd/s/99e31e17731c61b69da6a5226339041ef948ed6cf1a350b4b0d55d7161056a72" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:40:29.110743 systemd[1]: Started cri-containerd-e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4.scope - libcontainer container e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4. Jul 10 00:40:29.159744 containerd[1580]: time="2025-07-10T00:40:29.159694684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56cf869bbc-58fn4,Uid:23a003b1-c3dd-4dba-80ed-9c3f5fd13502,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4\"" Jul 10 00:40:29.162862 containerd[1580]: time="2025-07-10T00:40:29.162578332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:40:29.543445 kubelet[2714]: I0710 00:40:29.543403 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:40:30.416369 kubelet[2714]: I0710 00:40:30.416329 2714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca031b0-0a99-47a5-a801-c80e357a48f6" path="/var/lib/kubelet/pods/5ca031b0-0a99-47a5-a801-c80e357a48f6/volumes" Jul 10 00:40:30.471479 containerd[1580]: time="2025-07-10T00:40:30.470665095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:30.473131 containerd[1580]: time="2025-07-10T00:40:30.473113871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 10 00:40:30.473884 containerd[1580]: time="2025-07-10T00:40:30.473838563Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:30.477646 containerd[1580]: time="2025-07-10T00:40:30.476833541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:30.478709 containerd[1580]: time="2025-07-10T00:40:30.478179584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.315563942s" Jul 10 00:40:30.478709 containerd[1580]: time="2025-07-10T00:40:30.478460955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 00:40:30.484286 containerd[1580]: time="2025-07-10T00:40:30.484236830Z" level=info msg="CreateContainer within sandbox \"e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:40:30.491855 containerd[1580]: time="2025-07-10T00:40:30.491834360Z" level=info msg="Container 9224e3deaa04a3af8af72152a2d22883308f35a7fd7fd8b4604750593b60c161: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:40:30.498350 containerd[1580]: time="2025-07-10T00:40:30.497555984Z" level=info msg="CreateContainer within sandbox \"e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9224e3deaa04a3af8af72152a2d22883308f35a7fd7fd8b4604750593b60c161\"" Jul 10 00:40:30.499561 containerd[1580]: time="2025-07-10T00:40:30.499523549Z" level=info msg="StartContainer for 
\"9224e3deaa04a3af8af72152a2d22883308f35a7fd7fd8b4604750593b60c161\"" Jul 10 00:40:30.501857 containerd[1580]: time="2025-07-10T00:40:30.501715975Z" level=info msg="connecting to shim 9224e3deaa04a3af8af72152a2d22883308f35a7fd7fd8b4604750593b60c161" address="unix:///run/containerd/s/99e31e17731c61b69da6a5226339041ef948ed6cf1a350b4b0d55d7161056a72" protocol=ttrpc version=3 Jul 10 00:40:30.534826 systemd[1]: Started cri-containerd-9224e3deaa04a3af8af72152a2d22883308f35a7fd7fd8b4604750593b60c161.scope - libcontainer container 9224e3deaa04a3af8af72152a2d22883308f35a7fd7fd8b4604750593b60c161. Jul 10 00:40:30.607207 containerd[1580]: time="2025-07-10T00:40:30.607168937Z" level=info msg="StartContainer for \"9224e3deaa04a3af8af72152a2d22883308f35a7fd7fd8b4604750593b60c161\" returns successfully" Jul 10 00:40:30.608602 containerd[1580]: time="2025-07-10T00:40:30.608476760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:40:30.881819 systemd-networkd[1458]: cali89af13077f6: Gained IPv6LL Jul 10 00:40:31.780038 kubelet[2714]: I0710 00:40:31.779999 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:40:31.858381 containerd[1580]: time="2025-07-10T00:40:31.858341370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" id:\"0c819ed884176676ecd1f7afa65fb43f447dff86a4de70b097c8f6da6b59748b\" pid:4039 exit_status:1 exited_at:{seconds:1752108031 nanos:858104489}" Jul 10 00:40:31.953051 containerd[1580]: time="2025-07-10T00:40:31.953001175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" id:\"26998f5f36b86cb3ffcfd1c00be313b6a61b8172866fd1d8a73e479683248591\" pid:4062 exit_status:1 exited_at:{seconds:1752108031 nanos:952675943}" Jul 10 00:40:32.171206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430964259.mount: 
Deactivated successfully. Jul 10 00:40:32.182170 containerd[1580]: time="2025-07-10T00:40:32.182135597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:32.182779 containerd[1580]: time="2025-07-10T00:40:32.182743297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 10 00:40:32.183422 containerd[1580]: time="2025-07-10T00:40:32.183224829Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:32.184677 containerd[1580]: time="2025-07-10T00:40:32.184655872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:32.185556 containerd[1580]: time="2025-07-10T00:40:32.185222594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.576719404s" Jul 10 00:40:32.185556 containerd[1580]: time="2025-07-10T00:40:32.185256274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 00:40:32.188954 containerd[1580]: time="2025-07-10T00:40:32.188911502Z" level=info msg="CreateContainer within sandbox \"e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" 
Jul 10 00:40:32.196869 containerd[1580]: time="2025-07-10T00:40:32.196826521Z" level=info msg="Container 4af3bfda0372a515e6a41565cf6a97625602d66cc239f5fc0fba4861be49763d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:40:32.199076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803122818.mount: Deactivated successfully. Jul 10 00:40:32.208676 containerd[1580]: time="2025-07-10T00:40:32.208649929Z" level=info msg="CreateContainer within sandbox \"e9f584479a831e5164325f77fd71138ee59ef85f82206721da525a4934a09ff4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4af3bfda0372a515e6a41565cf6a97625602d66cc239f5fc0fba4861be49763d\"" Jul 10 00:40:32.209840 containerd[1580]: time="2025-07-10T00:40:32.209811002Z" level=info msg="StartContainer for \"4af3bfda0372a515e6a41565cf6a97625602d66cc239f5fc0fba4861be49763d\"" Jul 10 00:40:32.210831 containerd[1580]: time="2025-07-10T00:40:32.210800864Z" level=info msg="connecting to shim 4af3bfda0372a515e6a41565cf6a97625602d66cc239f5fc0fba4861be49763d" address="unix:///run/containerd/s/99e31e17731c61b69da6a5226339041ef948ed6cf1a350b4b0d55d7161056a72" protocol=ttrpc version=3 Jul 10 00:40:32.233736 systemd[1]: Started cri-containerd-4af3bfda0372a515e6a41565cf6a97625602d66cc239f5fc0fba4861be49763d.scope - libcontainer container 4af3bfda0372a515e6a41565cf6a97625602d66cc239f5fc0fba4861be49763d. 
Jul 10 00:40:32.280519 containerd[1580]: time="2025-07-10T00:40:32.280483071Z" level=info msg="StartContainer for \"4af3bfda0372a515e6a41565cf6a97625602d66cc239f5fc0fba4861be49763d\" returns successfully" Jul 10 00:40:34.414669 containerd[1580]: time="2025-07-10T00:40:34.414386320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6647c9884-zpnlk,Uid:09adf70f-5efc-4b9b-a226-b3a115bd24f5,Namespace:calico-system,Attempt:0,}" Jul 10 00:40:34.513011 systemd-networkd[1458]: cali223cb9d52ac: Link UP Jul 10 00:40:34.513216 systemd-networkd[1458]: cali223cb9d52ac: Gained carrier Jul 10 00:40:34.519613 kubelet[2714]: I0710 00:40:34.518604 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-56cf869bbc-58fn4" podStartSLOduration=3.494802688 podStartE2EDuration="6.518587813s" podCreationTimestamp="2025-07-10 00:40:28 +0000 UTC" firstStartedPulling="2025-07-10 00:40:29.16216799 +0000 UTC m=+26.834177538" lastFinishedPulling="2025-07-10 00:40:32.185953115 +0000 UTC m=+29.857962663" observedRunningTime="2025-07-10 00:40:32.573683381 +0000 UTC m=+30.245692939" watchObservedRunningTime="2025-07-10 00:40:34.518587813 +0000 UTC m=+32.190597361" Jul 10 00:40:34.523939 containerd[1580]: 2025-07-10 00:40:34.440 [INFO][4153] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:40:34.523939 containerd[1580]: 2025-07-10 00:40:34.453 [INFO][4153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0 calico-kube-controllers-6647c9884- calico-system 09adf70f-5efc-4b9b-a226-b3a115bd24f5 798 0 2025-07-10 00:40:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6647c9884 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] 
[] [] []} {k8s 172-238-161-214 calico-kube-controllers-6647c9884-zpnlk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali223cb9d52ac [] [] }} ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Namespace="calico-system" Pod="calico-kube-controllers-6647c9884-zpnlk" WorkloadEndpoint="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-" Jul 10 00:40:34.523939 containerd[1580]: 2025-07-10 00:40:34.453 [INFO][4153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Namespace="calico-system" Pod="calico-kube-controllers-6647c9884-zpnlk" WorkloadEndpoint="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" Jul 10 00:40:34.523939 containerd[1580]: 2025-07-10 00:40:34.476 [INFO][4164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" HandleID="k8s-pod-network.c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Workload="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" Jul 10 00:40:34.524125 containerd[1580]: 2025-07-10 00:40:34.476 [INFO][4164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" HandleID="k8s-pod-network.c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Workload="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-214", "pod":"calico-kube-controllers-6647c9884-zpnlk", "timestamp":"2025-07-10 00:40:34.476322448 +0000 UTC"}, Hostname:"172-238-161-214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:40:34.524125 containerd[1580]: 2025-07-10 00:40:34.476 [INFO][4164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:40:34.524125 containerd[1580]: 2025-07-10 00:40:34.476 [INFO][4164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:40:34.524125 containerd[1580]: 2025-07-10 00:40:34.476 [INFO][4164] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-214' Jul 10 00:40:34.524125 containerd[1580]: 2025-07-10 00:40:34.482 [INFO][4164] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" host="172-238-161-214" Jul 10 00:40:34.524125 containerd[1580]: 2025-07-10 00:40:34.486 [INFO][4164] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-161-214" Jul 10 00:40:34.524125 containerd[1580]: 2025-07-10 00:40:34.491 [INFO][4164] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:34.524125 containerd[1580]: 2025-07-10 00:40:34.494 [INFO][4164] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:34.524125 containerd[1580]: 2025-07-10 00:40:34.496 [INFO][4164] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:34.524389 containerd[1580]: 2025-07-10 00:40:34.496 [INFO][4164] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" host="172-238-161-214" Jul 10 00:40:34.524389 containerd[1580]: 2025-07-10 00:40:34.497 [INFO][4164] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4 Jul 10 00:40:34.524389 containerd[1580]: 
2025-07-10 00:40:34.500 [INFO][4164] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" host="172-238-161-214" Jul 10 00:40:34.524389 containerd[1580]: 2025-07-10 00:40:34.504 [INFO][4164] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.2/26] block=192.168.16.0/26 handle="k8s-pod-network.c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" host="172-238-161-214" Jul 10 00:40:34.524389 containerd[1580]: 2025-07-10 00:40:34.504 [INFO][4164] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.2/26] handle="k8s-pod-network.c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" host="172-238-161-214" Jul 10 00:40:34.524389 containerd[1580]: 2025-07-10 00:40:34.504 [INFO][4164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:40:34.524389 containerd[1580]: 2025-07-10 00:40:34.504 [INFO][4164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.2/26] IPv6=[] ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" HandleID="k8s-pod-network.c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Workload="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" Jul 10 00:40:34.524541 containerd[1580]: 2025-07-10 00:40:34.507 [INFO][4153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Namespace="calico-system" Pod="calico-kube-controllers-6647c9884-zpnlk" WorkloadEndpoint="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0", GenerateName:"calico-kube-controllers-6647c9884-", Namespace:"calico-system", SelfLink:"", 
UID:"09adf70f-5efc-4b9b-a226-b3a115bd24f5", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6647c9884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"", Pod:"calico-kube-controllers-6647c9884-zpnlk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali223cb9d52ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:34.524594 containerd[1580]: 2025-07-10 00:40:34.507 [INFO][4153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.2/32] ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Namespace="calico-system" Pod="calico-kube-controllers-6647c9884-zpnlk" WorkloadEndpoint="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" Jul 10 00:40:34.524594 containerd[1580]: 2025-07-10 00:40:34.507 [INFO][4153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali223cb9d52ac ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Namespace="calico-system" Pod="calico-kube-controllers-6647c9884-zpnlk" 
WorkloadEndpoint="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" Jul 10 00:40:34.524594 containerd[1580]: 2025-07-10 00:40:34.510 [INFO][4153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Namespace="calico-system" Pod="calico-kube-controllers-6647c9884-zpnlk" WorkloadEndpoint="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" Jul 10 00:40:34.525730 containerd[1580]: 2025-07-10 00:40:34.510 [INFO][4153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Namespace="calico-system" Pod="calico-kube-controllers-6647c9884-zpnlk" WorkloadEndpoint="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0", GenerateName:"calico-kube-controllers-6647c9884-", Namespace:"calico-system", SelfLink:"", UID:"09adf70f-5efc-4b9b-a226-b3a115bd24f5", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6647c9884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", 
ContainerID:"c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4", Pod:"calico-kube-controllers-6647c9884-zpnlk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali223cb9d52ac", MAC:"da:63:06:be:a9:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:34.525840 containerd[1580]: 2025-07-10 00:40:34.520 [INFO][4153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" Namespace="calico-system" Pod="calico-kube-controllers-6647c9884-zpnlk" WorkloadEndpoint="172--238--161--214-k8s-calico--kube--controllers--6647c9884--zpnlk-eth0" Jul 10 00:40:34.554390 containerd[1580]: time="2025-07-10T00:40:34.553980831Z" level=info msg="connecting to shim c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4" address="unix:///run/containerd/s/d2a84f2eeddb089421ffca7976cda556668e4b35211637e8a0afebc1e73d9a0e" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:40:34.585813 systemd[1]: Started cri-containerd-c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4.scope - libcontainer container c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4. 
Jul 10 00:40:34.629088 containerd[1580]: time="2025-07-10T00:40:34.629028937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6647c9884-zpnlk,Uid:09adf70f-5efc-4b9b-a226-b3a115bd24f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4\"" Jul 10 00:40:34.631267 containerd[1580]: time="2025-07-10T00:40:34.631231392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:40:35.414828 kubelet[2714]: E0710 00:40:35.414564 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:35.415965 containerd[1580]: time="2025-07-10T00:40:35.415921313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc97t,Uid:c8820429-a497-45f4-97be-845bf5122a43,Namespace:kube-system,Attempt:0,}" Jul 10 00:40:35.556947 systemd-networkd[1458]: calibcd9623b54d: Link UP Jul 10 00:40:35.557245 systemd-networkd[1458]: calibcd9623b54d: Gained carrier Jul 10 00:40:35.577287 containerd[1580]: 2025-07-10 00:40:35.477 [INFO][4247] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:40:35.577287 containerd[1580]: 2025-07-10 00:40:35.490 [INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0 coredns-674b8bbfcf- kube-system c8820429-a497-45f4-97be-845bf5122a43 804 0 2025-07-10 00:40:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-161-214 coredns-674b8bbfcf-fc97t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibcd9623b54d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] 
}} ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc97t" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-" Jul 10 00:40:35.577287 containerd[1580]: 2025-07-10 00:40:35.490 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc97t" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" Jul 10 00:40:35.577287 containerd[1580]: 2025-07-10 00:40:35.518 [INFO][4259] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" HandleID="k8s-pod-network.732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Workload="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.518 [INFO][4259] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" HandleID="k8s-pod-network.732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Workload="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1e0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-161-214", "pod":"coredns-674b8bbfcf-fc97t", "timestamp":"2025-07-10 00:40:35.518195162 +0000 UTC"}, Hostname:"172-238-161-214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.518 [INFO][4259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.518 [INFO][4259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.518 [INFO][4259] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-214' Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.525 [INFO][4259] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" host="172-238-161-214" Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.530 [INFO][4259] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-161-214" Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.534 [INFO][4259] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.536 [INFO][4259] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.538 [INFO][4259] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:35.577550 containerd[1580]: 2025-07-10 00:40:35.538 [INFO][4259] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" host="172-238-161-214" Jul 10 00:40:35.577781 containerd[1580]: 2025-07-10 00:40:35.540 [INFO][4259] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a Jul 10 00:40:35.577781 containerd[1580]: 2025-07-10 00:40:35.543 [INFO][4259] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" host="172-238-161-214" Jul 10 00:40:35.577781 containerd[1580]: 2025-07-10 
00:40:35.550 [INFO][4259] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.3/26] block=192.168.16.0/26 handle="k8s-pod-network.732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" host="172-238-161-214" Jul 10 00:40:35.577781 containerd[1580]: 2025-07-10 00:40:35.550 [INFO][4259] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.3/26] handle="k8s-pod-network.732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" host="172-238-161-214" Jul 10 00:40:35.577781 containerd[1580]: 2025-07-10 00:40:35.550 [INFO][4259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:40:35.577781 containerd[1580]: 2025-07-10 00:40:35.550 [INFO][4259] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.3/26] IPv6=[] ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" HandleID="k8s-pod-network.732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Workload="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" Jul 10 00:40:35.577892 containerd[1580]: 2025-07-10 00:40:35.553 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc97t" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8820429-a497-45f4-97be-845bf5122a43", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"", Pod:"coredns-674b8bbfcf-fc97t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcd9623b54d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:35.577892 containerd[1580]: 2025-07-10 00:40:35.553 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.3/32] ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc97t" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" Jul 10 00:40:35.577892 containerd[1580]: 2025-07-10 00:40:35.553 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibcd9623b54d ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc97t" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" Jul 10 00:40:35.577892 containerd[1580]: 2025-07-10 00:40:35.559 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc97t" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" Jul 10 00:40:35.577892 containerd[1580]: 2025-07-10 00:40:35.560 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc97t" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8820429-a497-45f4-97be-845bf5122a43", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a", Pod:"coredns-674b8bbfcf-fc97t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcd9623b54d", MAC:"6e:21:0f:74:f3:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:35.577892 containerd[1580]: 2025-07-10 00:40:35.573 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc97t" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--fc97t-eth0" Jul 10 00:40:35.604817 containerd[1580]: time="2025-07-10T00:40:35.604615397Z" level=info msg="connecting to shim 732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a" address="unix:///run/containerd/s/d024f74291112fb58e4e932d81c493af60f8d3a7bcffdd2065327d8ceaedc1c5" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:40:35.635797 systemd[1]: Started cri-containerd-732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a.scope - libcontainer container 732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a. 
Jul 10 00:40:35.689464 containerd[1580]: time="2025-07-10T00:40:35.689246519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc97t,Uid:c8820429-a497-45f4-97be-845bf5122a43,Namespace:kube-system,Attempt:0,} returns sandbox id \"732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a\"" Jul 10 00:40:35.689822 kubelet[2714]: E0710 00:40:35.689796 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:35.693909 containerd[1580]: time="2025-07-10T00:40:35.693850159Z" level=info msg="CreateContainer within sandbox \"732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:40:35.702994 containerd[1580]: time="2025-07-10T00:40:35.702960718Z" level=info msg="Container b8d62c4ac6c9add81809323aec9c19bc9c711c32eb100a4cbda527b6d33e7f7e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:40:35.710716 containerd[1580]: time="2025-07-10T00:40:35.710691264Z" level=info msg="CreateContainer within sandbox \"732ae3de4d4c5fbd22aeabe298a28652d0b3fe9d11d6c5b0becb783cf866788a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8d62c4ac6c9add81809323aec9c19bc9c711c32eb100a4cbda527b6d33e7f7e\"" Jul 10 00:40:35.713647 containerd[1580]: time="2025-07-10T00:40:35.712782528Z" level=info msg="StartContainer for \"b8d62c4ac6c9add81809323aec9c19bc9c711c32eb100a4cbda527b6d33e7f7e\"" Jul 10 00:40:35.713647 containerd[1580]: time="2025-07-10T00:40:35.713462900Z" level=info msg="connecting to shim b8d62c4ac6c9add81809323aec9c19bc9c711c32eb100a4cbda527b6d33e7f7e" address="unix:///run/containerd/s/d024f74291112fb58e4e932d81c493af60f8d3a7bcffdd2065327d8ceaedc1c5" protocol=ttrpc version=3 Jul 10 00:40:35.736864 systemd[1]: Started cri-containerd-b8d62c4ac6c9add81809323aec9c19bc9c711c32eb100a4cbda527b6d33e7f7e.scope 
- libcontainer container b8d62c4ac6c9add81809323aec9c19bc9c711c32eb100a4cbda527b6d33e7f7e. Jul 10 00:40:35.779489 containerd[1580]: time="2025-07-10T00:40:35.779431662Z" level=info msg="StartContainer for \"b8d62c4ac6c9add81809323aec9c19bc9c711c32eb100a4cbda527b6d33e7f7e\" returns successfully" Jul 10 00:40:36.257759 systemd-networkd[1458]: cali223cb9d52ac: Gained IPv6LL Jul 10 00:40:36.415346 containerd[1580]: time="2025-07-10T00:40:36.415051995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fjcsh,Uid:db18eeab-67e9-4648-b07d-c6930950a832,Namespace:calico-system,Attempt:0,}" Jul 10 00:40:36.514079 systemd-networkd[1458]: calie7d2590b2f0: Link UP Jul 10 00:40:36.515555 systemd-networkd[1458]: calie7d2590b2f0: Gained carrier Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.441 [INFO][4369] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.451 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0 goldmane-768f4c5c69- calico-system db18eeab-67e9-4648-b07d-c6930950a832 805 0 2025-07-10 00:40:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-238-161-214 goldmane-768f4c5c69-fjcsh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie7d2590b2f0 [] [] }} ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Namespace="calico-system" Pod="goldmane-768f4c5c69-fjcsh" WorkloadEndpoint="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.451 [INFO][4369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Namespace="calico-system" Pod="goldmane-768f4c5c69-fjcsh" WorkloadEndpoint="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.478 [INFO][4381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" HandleID="k8s-pod-network.6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Workload="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.478 [INFO][4381] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" HandleID="k8s-pod-network.6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Workload="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f870), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-214", "pod":"goldmane-768f4c5c69-fjcsh", "timestamp":"2025-07-10 00:40:36.478191556 +0000 UTC"}, Hostname:"172-238-161-214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.478 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.478 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.478 [INFO][4381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-214' Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.485 [INFO][4381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" host="172-238-161-214" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.490 [INFO][4381] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-161-214" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.494 [INFO][4381] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.496 [INFO][4381] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.498 [INFO][4381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.498 [INFO][4381] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" host="172-238-161-214" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.499 [INFO][4381] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46 Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.504 [INFO][4381] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" host="172-238-161-214" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.509 [INFO][4381] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.4/26] block=192.168.16.0/26 
handle="k8s-pod-network.6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" host="172-238-161-214" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.509 [INFO][4381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.4/26] handle="k8s-pod-network.6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" host="172-238-161-214" Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.509 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:40:36.529369 containerd[1580]: 2025-07-10 00:40:36.509 [INFO][4381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.4/26] IPv6=[] ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" HandleID="k8s-pod-network.6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Workload="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" Jul 10 00:40:36.530322 containerd[1580]: 2025-07-10 00:40:36.512 [INFO][4369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Namespace="calico-system" Pod="goldmane-768f4c5c69-fjcsh" WorkloadEndpoint="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"db18eeab-67e9-4648-b07d-c6930950a832", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"", Pod:"goldmane-768f4c5c69-fjcsh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie7d2590b2f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:36.530322 containerd[1580]: 2025-07-10 00:40:36.512 [INFO][4369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.4/32] ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Namespace="calico-system" Pod="goldmane-768f4c5c69-fjcsh" WorkloadEndpoint="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" Jul 10 00:40:36.530322 containerd[1580]: 2025-07-10 00:40:36.512 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7d2590b2f0 ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Namespace="calico-system" Pod="goldmane-768f4c5c69-fjcsh" WorkloadEndpoint="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" Jul 10 00:40:36.530322 containerd[1580]: 2025-07-10 00:40:36.514 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Namespace="calico-system" Pod="goldmane-768f4c5c69-fjcsh" WorkloadEndpoint="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" Jul 10 00:40:36.530322 containerd[1580]: 2025-07-10 00:40:36.514 [INFO][4369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-fjcsh" WorkloadEndpoint="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"db18eeab-67e9-4648-b07d-c6930950a832", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46", Pod:"goldmane-768f4c5c69-fjcsh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie7d2590b2f0", MAC:"ea:f2:c9:ed:6e:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:36.530322 containerd[1580]: 2025-07-10 00:40:36.525 [INFO][4369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" Namespace="calico-system" Pod="goldmane-768f4c5c69-fjcsh" WorkloadEndpoint="172--238--161--214-k8s-goldmane--768f4c5c69--fjcsh-eth0" Jul 10 00:40:36.556683 
containerd[1580]: time="2025-07-10T00:40:36.556648618Z" level=info msg="connecting to shim 6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46" address="unix:///run/containerd/s/162256716e0603de3bb3952fd9f4a8e1ba91c80892d7e68c894995163a87945f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:40:36.577034 kubelet[2714]: E0710 00:40:36.576995 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:36.593908 systemd[1]: Started cri-containerd-6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46.scope - libcontainer container 6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46. Jul 10 00:40:36.605363 kubelet[2714]: I0710 00:40:36.604919 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fc97t" podStartSLOduration=28.604904948 podStartE2EDuration="28.604904948s" podCreationTimestamp="2025-07-10 00:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:40:36.590697729 +0000 UTC m=+34.262707277" watchObservedRunningTime="2025-07-10 00:40:36.604904948 +0000 UTC m=+34.276914496" Jul 10 00:40:36.680865 containerd[1580]: time="2025-07-10T00:40:36.680832775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fjcsh,Uid:db18eeab-67e9-4648-b07d-c6930950a832,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46\"" Jul 10 00:40:37.414847 containerd[1580]: time="2025-07-10T00:40:37.414640259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84597f7d9f-264zs,Uid:a3302309-0908-4122-b93a-2aa48230a086,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:40:37.474920 systemd-networkd[1458]: calibcd9623b54d: Gained IPv6LL Jul 
10 00:40:37.567527 systemd-networkd[1458]: cali68889949361: Link UP Jul 10 00:40:37.568799 systemd-networkd[1458]: cali68889949361: Gained carrier Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.454 [INFO][4472] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.467 [INFO][4472] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0 calico-apiserver-84597f7d9f- calico-apiserver a3302309-0908-4122-b93a-2aa48230a086 806 0 2025-07-10 00:40:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84597f7d9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-161-214 calico-apiserver-84597f7d9f-264zs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali68889949361 [] [] }} ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-264zs" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.467 [INFO][4472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-264zs" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.507 [INFO][4484] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" HandleID="k8s-pod-network.002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" 
Workload="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.509 [INFO][4484] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" HandleID="k8s-pod-network.002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Workload="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-238-161-214", "pod":"calico-apiserver-84597f7d9f-264zs", "timestamp":"2025-07-10 00:40:37.507096454 +0000 UTC"}, Hostname:"172-238-161-214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.509 [INFO][4484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.509 [INFO][4484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.509 [INFO][4484] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-214' Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.523 [INFO][4484] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" host="172-238-161-214" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.533 [INFO][4484] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-161-214" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.541 [INFO][4484] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.543 [INFO][4484] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.548 [INFO][4484] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.549 [INFO][4484] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" host="172-238-161-214" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.550 [INFO][4484] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.554 [INFO][4484] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" host="172-238-161-214" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.561 [INFO][4484] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.5/26] block=192.168.16.0/26 
handle="k8s-pod-network.002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" host="172-238-161-214" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.561 [INFO][4484] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.5/26] handle="k8s-pod-network.002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" host="172-238-161-214" Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.562 [INFO][4484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:40:37.580394 containerd[1580]: 2025-07-10 00:40:37.562 [INFO][4484] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.5/26] IPv6=[] ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" HandleID="k8s-pod-network.002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Workload="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" Jul 10 00:40:37.583332 containerd[1580]: 2025-07-10 00:40:37.564 [INFO][4472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-264zs" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0", GenerateName:"calico-apiserver-84597f7d9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3302309-0908-4122-b93a-2aa48230a086", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84597f7d9f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"", Pod:"calico-apiserver-84597f7d9f-264zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68889949361", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:37.583332 containerd[1580]: 2025-07-10 00:40:37.564 [INFO][4472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.5/32] ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-264zs" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" Jul 10 00:40:37.583332 containerd[1580]: 2025-07-10 00:40:37.564 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68889949361 ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-264zs" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" Jul 10 00:40:37.583332 containerd[1580]: 2025-07-10 00:40:37.566 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-264zs" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" Jul 10 00:40:37.583332 containerd[1580]: 2025-07-10 00:40:37.566 [INFO][4472] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-264zs" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0", GenerateName:"calico-apiserver-84597f7d9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3302309-0908-4122-b93a-2aa48230a086", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84597f7d9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b", Pod:"calico-apiserver-84597f7d9f-264zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68889949361", MAC:"ae:f4:98:8b:1c:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:37.583332 containerd[1580]: 2025-07-10 00:40:37.575 [INFO][4472] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-264zs" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--264zs-eth0" Jul 10 00:40:37.586580 kubelet[2714]: E0710 00:40:37.586528 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:37.603226 systemd-networkd[1458]: calie7d2590b2f0: Gained IPv6LL Jul 10 00:40:37.612401 containerd[1580]: time="2025-07-10T00:40:37.611863984Z" level=info msg="connecting to shim 002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b" address="unix:///run/containerd/s/ceb88f7a2b7f9e5cee6e5e21d6f39f8c94f481684b9431b3b6c81516feb5a7c3" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:40:37.646796 systemd[1]: Started cri-containerd-002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b.scope - libcontainer container 002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b. 
Jul 10 00:40:37.720145 containerd[1580]: time="2025-07-10T00:40:37.720115511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84597f7d9f-264zs,Uid:a3302309-0908-4122-b93a-2aa48230a086,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b\"" Jul 10 00:40:37.729830 containerd[1580]: time="2025-07-10T00:40:37.729794451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:37.730846 containerd[1580]: time="2025-07-10T00:40:37.730811293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 10 00:40:37.731351 containerd[1580]: time="2025-07-10T00:40:37.731315694Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:37.733372 containerd[1580]: time="2025-07-10T00:40:37.733305578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:37.733779 containerd[1580]: time="2025-07-10T00:40:37.733731109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.102470837s" Jul 10 00:40:37.733779 containerd[1580]: time="2025-07-10T00:40:37.733761229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference 
\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 00:40:37.736648 containerd[1580]: time="2025-07-10T00:40:37.735250482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:40:37.750124 containerd[1580]: time="2025-07-10T00:40:37.750088811Z" level=info msg="CreateContainer within sandbox \"c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:40:37.756701 containerd[1580]: time="2025-07-10T00:40:37.756672625Z" level=info msg="Container 56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:40:37.761028 containerd[1580]: time="2025-07-10T00:40:37.760991713Z" level=info msg="CreateContainer within sandbox \"c77dd14647a61e2314491f48d8104053387feaee8f707926b3c4ffa2bb6e6ca4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\"" Jul 10 00:40:37.762611 containerd[1580]: time="2025-07-10T00:40:37.761740105Z" level=info msg="StartContainer for \"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\"" Jul 10 00:40:37.762814 containerd[1580]: time="2025-07-10T00:40:37.762774107Z" level=info msg="connecting to shim 56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05" address="unix:///run/containerd/s/d2a84f2eeddb089421ffca7976cda556668e4b35211637e8a0afebc1e73d9a0e" protocol=ttrpc version=3 Jul 10 00:40:37.783747 systemd[1]: Started cri-containerd-56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05.scope - libcontainer container 56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05. 
Jul 10 00:40:37.830078 containerd[1580]: time="2025-07-10T00:40:37.830054362Z" level=info msg="StartContainer for \"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" returns successfully" Jul 10 00:40:38.414961 containerd[1580]: time="2025-07-10T00:40:38.414909559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84597f7d9f-fzrmq,Uid:c36c8077-0d0f-4e6b-8dab-82cc1c2f788f,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:40:38.573172 systemd-networkd[1458]: calif7512d62ad2: Link UP Jul 10 00:40:38.575879 systemd-networkd[1458]: calif7512d62ad2: Gained carrier Jul 10 00:40:38.600312 kubelet[2714]: E0710 00:40:38.600291 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.447 [INFO][4606] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.463 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0 calico-apiserver-84597f7d9f- calico-apiserver c36c8077-0d0f-4e6b-8dab-82cc1c2f788f 801 0 2025-07-10 00:40:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84597f7d9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-161-214 calico-apiserver-84597f7d9f-fzrmq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7512d62ad2 [] [] }} ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-fzrmq" 
WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.463 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-fzrmq" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.498 [INFO][4617] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" HandleID="k8s-pod-network.c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Workload="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.499 [INFO][4617] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" HandleID="k8s-pod-network.c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Workload="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-238-161-214", "pod":"calico-apiserver-84597f7d9f-fzrmq", "timestamp":"2025-07-10 00:40:38.498586791 +0000 UTC"}, Hostname:"172-238-161-214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.499 [INFO][4617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.499 [INFO][4617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.499 [INFO][4617] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-214' Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.513 [INFO][4617] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" host="172-238-161-214" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.519 [INFO][4617] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-161-214" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.525 [INFO][4617] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.528 [INFO][4617] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.531 [INFO][4617] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.532 [INFO][4617] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" host="172-238-161-214" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.535 [INFO][4617] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4 Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.544 [INFO][4617] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" host="172-238-161-214" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.561 [INFO][4617] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.6/26] block=192.168.16.0/26 
handle="k8s-pod-network.c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" host="172-238-161-214" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.561 [INFO][4617] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.6/26] handle="k8s-pod-network.c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" host="172-238-161-214" Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.563 [INFO][4617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:40:38.607064 containerd[1580]: 2025-07-10 00:40:38.563 [INFO][4617] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.6/26] IPv6=[] ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" HandleID="k8s-pod-network.c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Workload="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" Jul 10 00:40:38.608672 containerd[1580]: 2025-07-10 00:40:38.568 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-fzrmq" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0", GenerateName:"calico-apiserver-84597f7d9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c36c8077-0d0f-4e6b-8dab-82cc1c2f788f", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84597f7d9f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"", Pod:"calico-apiserver-84597f7d9f-fzrmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7512d62ad2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:38.608672 containerd[1580]: 2025-07-10 00:40:38.568 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.6/32] ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-fzrmq" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" Jul 10 00:40:38.608672 containerd[1580]: 2025-07-10 00:40:38.569 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7512d62ad2 ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-fzrmq" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" Jul 10 00:40:38.608672 containerd[1580]: 2025-07-10 00:40:38.574 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-fzrmq" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" Jul 10 00:40:38.608672 containerd[1580]: 2025-07-10 00:40:38.574 [INFO][4606] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-fzrmq" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0", GenerateName:"calico-apiserver-84597f7d9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c36c8077-0d0f-4e6b-8dab-82cc1c2f788f", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84597f7d9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4", Pod:"calico-apiserver-84597f7d9f-fzrmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7512d62ad2", MAC:"ce:9a:b7:ff:1a:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:38.608672 containerd[1580]: 2025-07-10 00:40:38.589 [INFO][4606] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" Namespace="calico-apiserver" Pod="calico-apiserver-84597f7d9f-fzrmq" WorkloadEndpoint="172--238--161--214-k8s-calico--apiserver--84597f7d9f--fzrmq-eth0" Jul 10 00:40:38.616411 kubelet[2714]: I0710 00:40:38.616347 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6647c9884-zpnlk" podStartSLOduration=17.511932279 podStartE2EDuration="20.616335119s" podCreationTimestamp="2025-07-10 00:40:18 +0000 UTC" firstStartedPulling="2025-07-10 00:40:34.630568051 +0000 UTC m=+32.302577599" lastFinishedPulling="2025-07-10 00:40:37.734970891 +0000 UTC m=+35.406980439" observedRunningTime="2025-07-10 00:40:38.615092827 +0000 UTC m=+36.287102385" watchObservedRunningTime="2025-07-10 00:40:38.616335119 +0000 UTC m=+36.288344667" Jul 10 00:40:38.649959 containerd[1580]: time="2025-07-10T00:40:38.649903435Z" level=info msg="connecting to shim c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4" address="unix:///run/containerd/s/65909ecd230a78a9602354fbc6377ac208779847f0989e5b49b708ea38931115" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:40:38.683746 systemd[1]: Started cri-containerd-c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4.scope - libcontainer container c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4. 
Jul 10 00:40:38.756838 containerd[1580]: time="2025-07-10T00:40:38.756701783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84597f7d9f-fzrmq,Uid:c36c8077-0d0f-4e6b-8dab-82cc1c2f788f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4\"" Jul 10 00:40:39.130429 kubelet[2714]: I0710 00:40:39.130329 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:40:39.131538 kubelet[2714]: E0710 00:40:39.131478 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:39.189543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249957026.mount: Deactivated successfully. Jul 10 00:40:39.414459 kubelet[2714]: E0710 00:40:39.414254 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:39.415057 containerd[1580]: time="2025-07-10T00:40:39.415013237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbsg4,Uid:d82186e1-710f-473a-b991-55b42ccd0785,Namespace:calico-system,Attempt:0,}" Jul 10 00:40:39.415943 containerd[1580]: time="2025-07-10T00:40:39.415734299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r72x5,Uid:ea45d8e8-7379-4561-8349-64b7edc81181,Namespace:kube-system,Attempt:0,}" Jul 10 00:40:39.521785 systemd-networkd[1458]: cali68889949361: Gained IPv6LL Jul 10 00:40:39.576973 systemd-networkd[1458]: cali6bf89ecd8a4: Link UP Jul 10 00:40:39.578833 systemd-networkd[1458]: cali6bf89ecd8a4: Gained carrier Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.460 [INFO][4707] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:40:39.601652 containerd[1580]: 
2025-07-10 00:40:39.476 [INFO][4707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0 coredns-674b8bbfcf- kube-system ea45d8e8-7379-4561-8349-64b7edc81181 802 0 2025-07-10 00:40:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-161-214 coredns-674b8bbfcf-r72x5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6bf89ecd8a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-r72x5" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.476 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-r72x5" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.520 [INFO][4738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" HandleID="k8s-pod-network.738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Workload="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.520 [INFO][4738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" HandleID="k8s-pod-network.738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Workload="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5840), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-161-214", "pod":"coredns-674b8bbfcf-r72x5", "timestamp":"2025-07-10 00:40:39.520158835 +0000 UTC"}, Hostname:"172-238-161-214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.520 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.520 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.520 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-214' Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.532 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" host="172-238-161-214" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.538 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-161-214" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.543 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.546 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.549 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.550 [INFO][4738] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.0/26 
handle="k8s-pod-network.738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" host="172-238-161-214" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.551 [INFO][4738] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3 Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.556 [INFO][4738] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" host="172-238-161-214" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.565 [INFO][4738] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.7/26] block=192.168.16.0/26 handle="k8s-pod-network.738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" host="172-238-161-214" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.565 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.7/26] handle="k8s-pod-network.738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" host="172-238-161-214" Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.565 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:40:39.601652 containerd[1580]: 2025-07-10 00:40:39.565 [INFO][4738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.7/26] IPv6=[] ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" HandleID="k8s-pod-network.738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Workload="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" Jul 10 00:40:39.602129 containerd[1580]: 2025-07-10 00:40:39.571 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-r72x5" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea45d8e8-7379-4561-8349-64b7edc81181", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"", Pod:"coredns-674b8bbfcf-r72x5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6bf89ecd8a4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:39.602129 containerd[1580]: 2025-07-10 00:40:39.572 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.7/32] ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-r72x5" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" Jul 10 00:40:39.602129 containerd[1580]: 2025-07-10 00:40:39.572 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6bf89ecd8a4 ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-r72x5" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" Jul 10 00:40:39.602129 containerd[1580]: 2025-07-10 00:40:39.580 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-r72x5" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" Jul 10 00:40:39.602129 containerd[1580]: 2025-07-10 00:40:39.584 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-r72x5" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea45d8e8-7379-4561-8349-64b7edc81181", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3", Pod:"coredns-674b8bbfcf-r72x5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6bf89ecd8a4", MAC:"02:b8:72:83:d0:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:39.602129 containerd[1580]: 2025-07-10 00:40:39.594 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-r72x5" WorkloadEndpoint="172--238--161--214-k8s-coredns--674b8bbfcf--r72x5-eth0" Jul 10 00:40:39.608487 kubelet[2714]: E0710 00:40:39.608467 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:39.608950 kubelet[2714]: I0710 00:40:39.608774 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:40:39.626725 containerd[1580]: time="2025-07-10T00:40:39.626676536Z" level=info msg="connecting to shim 738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3" address="unix:///run/containerd/s/0bede598f0c14dc92e93ee37a1927306780d18828bd53103af635c94513f3bf0" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:40:39.667929 systemd[1]: Started cri-containerd-738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3.scope - libcontainer container 738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3. 
Jul 10 00:40:39.705123 systemd-networkd[1458]: calicd923addcd7: Link UP Jul 10 00:40:39.706439 systemd-networkd[1458]: calicd923addcd7: Gained carrier Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.459 [INFO][4708] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.471 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--161--214-k8s-csi--node--driver--nbsg4-eth0 csi-node-driver- calico-system d82186e1-710f-473a-b991-55b42ccd0785 706 0 2025-07-10 00:40:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-238-161-214 csi-node-driver-nbsg4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicd923addcd7 [] [] }} ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Namespace="calico-system" Pod="csi-node-driver-nbsg4" WorkloadEndpoint="172--238--161--214-k8s-csi--node--driver--nbsg4-" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.471 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Namespace="calico-system" Pod="csi-node-driver-nbsg4" WorkloadEndpoint="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.522 [INFO][4735] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" HandleID="k8s-pod-network.66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Workload="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" 
Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.522 [INFO][4735] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" HandleID="k8s-pod-network.66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Workload="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-161-214", "pod":"csi-node-driver-nbsg4", "timestamp":"2025-07-10 00:40:39.522183799 +0000 UTC"}, Hostname:"172-238-161-214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.524 [INFO][4735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.565 [INFO][4735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.565 [INFO][4735] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-161-214' Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.633 [INFO][4735] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" host="172-238-161-214" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.643 [INFO][4735] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-161-214" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.652 [INFO][4735] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.655 [INFO][4735] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.665 [INFO][4735] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="172-238-161-214" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.668 [INFO][4735] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" host="172-238-161-214" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.672 [INFO][4735] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.682 [INFO][4735] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" host="172-238-161-214" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.689 [INFO][4735] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.8/26] block=192.168.16.0/26 
handle="k8s-pod-network.66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" host="172-238-161-214" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.689 [INFO][4735] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.8/26] handle="k8s-pod-network.66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" host="172-238-161-214" Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.690 [INFO][4735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:40:39.747484 containerd[1580]: 2025-07-10 00:40:39.690 [INFO][4735] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.8/26] IPv6=[] ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" HandleID="k8s-pod-network.66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Workload="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" Jul 10 00:40:39.748663 containerd[1580]: 2025-07-10 00:40:39.698 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Namespace="calico-system" Pod="csi-node-driver-nbsg4" WorkloadEndpoint="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-csi--node--driver--nbsg4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d82186e1-710f-473a-b991-55b42ccd0785", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"", Pod:"csi-node-driver-nbsg4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd923addcd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:39.748663 containerd[1580]: 2025-07-10 00:40:39.698 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.8/32] ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Namespace="calico-system" Pod="csi-node-driver-nbsg4" WorkloadEndpoint="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" Jul 10 00:40:39.748663 containerd[1580]: 2025-07-10 00:40:39.698 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd923addcd7 ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Namespace="calico-system" Pod="csi-node-driver-nbsg4" WorkloadEndpoint="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" Jul 10 00:40:39.748663 containerd[1580]: 2025-07-10 00:40:39.707 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Namespace="calico-system" Pod="csi-node-driver-nbsg4" WorkloadEndpoint="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" Jul 10 00:40:39.748663 containerd[1580]: 2025-07-10 00:40:39.708 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" Namespace="calico-system" Pod="csi-node-driver-nbsg4" WorkloadEndpoint="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--161--214-k8s-csi--node--driver--nbsg4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d82186e1-710f-473a-b991-55b42ccd0785", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 40, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-161-214", ContainerID:"66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed", Pod:"csi-node-driver-nbsg4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd923addcd7", MAC:"e6:08:08:e7:f9:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:40:39.748663 containerd[1580]: 2025-07-10 00:40:39.721 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" 
Namespace="calico-system" Pod="csi-node-driver-nbsg4" WorkloadEndpoint="172--238--161--214-k8s-csi--node--driver--nbsg4-eth0" Jul 10 00:40:39.798392 containerd[1580]: time="2025-07-10T00:40:39.798353229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r72x5,Uid:ea45d8e8-7379-4561-8349-64b7edc81181,Namespace:kube-system,Attempt:0,} returns sandbox id \"738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3\"" Jul 10 00:40:39.803308 kubelet[2714]: E0710 00:40:39.803216 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:40:39.810079 containerd[1580]: time="2025-07-10T00:40:39.810058541Z" level=info msg="CreateContainer within sandbox \"738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:40:39.820028 containerd[1580]: time="2025-07-10T00:40:39.819943510Z" level=info msg="connecting to shim 66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed" address="unix:///run/containerd/s/85d49df97afe182f9d5d3a3dc04167bdd17d90c4a4cda41393f501dd705609c1" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:40:39.834450 containerd[1580]: time="2025-07-10T00:40:39.830957831Z" level=info msg="Container 2680f79c63fc929b3374d6adc83ede88e4dfce9913757978b401779b0c3c1a2a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:40:39.836062 containerd[1580]: time="2025-07-10T00:40:39.836009840Z" level=info msg="CreateContainer within sandbox \"738a73f4adbdfa69a74a2ffb0ae68e83767dc93792754b511d21052f951bf1f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2680f79c63fc929b3374d6adc83ede88e4dfce9913757978b401779b0c3c1a2a\"" Jul 10 00:40:39.836864 containerd[1580]: time="2025-07-10T00:40:39.836835732Z" level=info msg="StartContainer for 
\"2680f79c63fc929b3374d6adc83ede88e4dfce9913757978b401779b0c3c1a2a\""
Jul 10 00:40:39.839655 containerd[1580]: time="2025-07-10T00:40:39.839354846Z" level=info msg="connecting to shim 2680f79c63fc929b3374d6adc83ede88e4dfce9913757978b401779b0c3c1a2a" address="unix:///run/containerd/s/0bede598f0c14dc92e93ee37a1927306780d18828bd53103af635c94513f3bf0" protocol=ttrpc version=3
Jul 10 00:40:39.869761 systemd[1]: Started cri-containerd-2680f79c63fc929b3374d6adc83ede88e4dfce9913757978b401779b0c3c1a2a.scope - libcontainer container 2680f79c63fc929b3374d6adc83ede88e4dfce9913757978b401779b0c3c1a2a.
Jul 10 00:40:39.879894 systemd[1]: Started cri-containerd-66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed.scope - libcontainer container 66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed.
Jul 10 00:40:39.959575 containerd[1580]: time="2025-07-10T00:40:39.959456732Z" level=info msg="StartContainer for \"2680f79c63fc929b3374d6adc83ede88e4dfce9913757978b401779b0c3c1a2a\" returns successfully"
Jul 10 00:40:39.984483 containerd[1580]: time="2025-07-10T00:40:39.984385329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbsg4,Uid:d82186e1-710f-473a-b991-55b42ccd0785,Namespace:calico-system,Attempt:0,} returns sandbox id \"66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed\""
Jul 10 00:40:40.033865 systemd-networkd[1458]: calif7512d62ad2: Gained IPv6LL
Jul 10 00:40:40.063482 containerd[1580]: time="2025-07-10T00:40:40.063456315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:40.064482 containerd[1580]: time="2025-07-10T00:40:40.064459207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 10 00:40:40.065916 containerd[1580]: time="2025-07-10T00:40:40.065092398Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:40.066910 containerd[1580]: time="2025-07-10T00:40:40.066891882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:40.068179 containerd[1580]: time="2025-07-10T00:40:40.068161244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.332778792s"
Jul 10 00:40:40.068260 containerd[1580]: time="2025-07-10T00:40:40.068246504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 10 00:40:40.069905 containerd[1580]: time="2025-07-10T00:40:40.069891877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 10 00:40:40.073198 containerd[1580]: time="2025-07-10T00:40:40.073163893Z" level=info msg="CreateContainer within sandbox \"6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 10 00:40:40.077869 containerd[1580]: time="2025-07-10T00:40:40.077850352Z" level=info msg="Container bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:40:40.094073 containerd[1580]: time="2025-07-10T00:40:40.094054441Z" level=info msg="CreateContainer within sandbox \"6e464a0eeb657220df712c82f7413955604de6e68ad7893ea24246c096dacd46\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\""
Jul 10 00:40:40.094600 containerd[1580]: time="2025-07-10T00:40:40.094578202Z" level=info msg="StartContainer for \"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\""
Jul 10 00:40:40.096045 containerd[1580]: time="2025-07-10T00:40:40.095519863Z" level=info msg="connecting to shim bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b" address="unix:///run/containerd/s/162256716e0603de3bb3952fd9f4a8e1ba91c80892d7e68c894995163a87945f" protocol=ttrpc version=3
Jul 10 00:40:40.140753 systemd[1]: Started cri-containerd-bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b.scope - libcontainer container bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b.
Jul 10 00:40:40.220843 containerd[1580]: time="2025-07-10T00:40:40.220803302Z" level=info msg="StartContainer for \"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" returns successfully"
Jul 10 00:40:40.372726 systemd-networkd[1458]: vxlan.calico: Link UP
Jul 10 00:40:40.372738 systemd-networkd[1458]: vxlan.calico: Gained carrier
Jul 10 00:40:40.614681 kubelet[2714]: E0710 00:40:40.614588 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:40.638909 kubelet[2714]: I0710 00:40:40.638359 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r72x5" podStartSLOduration=32.638345506 podStartE2EDuration="32.638345506s" podCreationTimestamp="2025-07-10 00:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:40:40.636902103 +0000 UTC m=+38.308911661" watchObservedRunningTime="2025-07-10 00:40:40.638345506 +0000 UTC m=+38.310355054"
Jul 10 00:40:40.767399 containerd[1580]: time="2025-07-10T00:40:40.767354622Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"af0161754d1289cbb4add11ce50bc87141177d81d059d428cc88693acbf2d8b9\" pid:5040 exit_status:1 exited_at:{seconds:1752108040 nanos:766712090}"
Jul 10 00:40:40.865855 systemd-networkd[1458]: calicd923addcd7: Gained IPv6LL
Jul 10 00:40:41.249912 systemd-networkd[1458]: cali6bf89ecd8a4: Gained IPv6LL
Jul 10 00:40:41.628360 kubelet[2714]: E0710 00:40:41.628247 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:41.706190 containerd[1580]: time="2025-07-10T00:40:41.706120281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"f5565ac28753e848c90f73590600561b85a828a551814b8b0c5ee8ce2b7bf4f2\" pid:5087 exit_status:1 exited_at:{seconds:1752108041 nanos:705560881}"
Jul 10 00:40:41.955433 containerd[1580]: time="2025-07-10T00:40:41.955312935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:41.956275 containerd[1580]: time="2025-07-10T00:40:41.956137275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 10 00:40:41.956727 containerd[1580]: time="2025-07-10T00:40:41.956696497Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:41.958174 containerd[1580]: time="2025-07-10T00:40:41.958141609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:41.958819 containerd[1580]: time="2025-07-10T00:40:41.958797451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.888820694s"
Jul 10 00:40:41.958892 containerd[1580]: time="2025-07-10T00:40:41.958879491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 10 00:40:41.960732 containerd[1580]: time="2025-07-10T00:40:41.960670264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 10 00:40:41.962922 containerd[1580]: time="2025-07-10T00:40:41.962895068Z" level=info msg="CreateContainer within sandbox \"002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 10 00:40:41.972648 containerd[1580]: time="2025-07-10T00:40:41.970826772Z" level=info msg="Container 58111965089e90a46d34d695a151923d06e8dea08a68a86b070693b0c5741dd3: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:40:41.973956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127934793.mount: Deactivated successfully.
Jul 10 00:40:41.982556 containerd[1580]: time="2025-07-10T00:40:41.982529543Z" level=info msg="CreateContainer within sandbox \"002a33071ee650698286029999922d91f401204e98e75261729f5c23b948ce7b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"58111965089e90a46d34d695a151923d06e8dea08a68a86b070693b0c5741dd3\""
Jul 10 00:40:41.983849 containerd[1580]: time="2025-07-10T00:40:41.983813745Z" level=info msg="StartContainer for \"58111965089e90a46d34d695a151923d06e8dea08a68a86b070693b0c5741dd3\""
Jul 10 00:40:41.985231 containerd[1580]: time="2025-07-10T00:40:41.985208147Z" level=info msg="connecting to shim 58111965089e90a46d34d695a151923d06e8dea08a68a86b070693b0c5741dd3" address="unix:///run/containerd/s/ceb88f7a2b7f9e5cee6e5e21d6f39f8c94f481684b9431b3b6c81516feb5a7c3" protocol=ttrpc version=3
Jul 10 00:40:42.008762 systemd[1]: Started cri-containerd-58111965089e90a46d34d695a151923d06e8dea08a68a86b070693b0c5741dd3.scope - libcontainer container 58111965089e90a46d34d695a151923d06e8dea08a68a86b070693b0c5741dd3.
Jul 10 00:40:42.055570 containerd[1580]: time="2025-07-10T00:40:42.055531900Z" level=info msg="StartContainer for \"58111965089e90a46d34d695a151923d06e8dea08a68a86b070693b0c5741dd3\" returns successfully"
Jul 10 00:40:42.134094 containerd[1580]: time="2025-07-10T00:40:42.134054105Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:42.135374 containerd[1580]: time="2025-07-10T00:40:42.135345168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 10 00:40:42.136806 containerd[1580]: time="2025-07-10T00:40:42.136776430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 176.084756ms"
Jul 10 00:40:42.136846 containerd[1580]: time="2025-07-10T00:40:42.136823370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 10 00:40:42.139156 containerd[1580]: time="2025-07-10T00:40:42.139103494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 10 00:40:42.140822 containerd[1580]: time="2025-07-10T00:40:42.140512917Z" level=info msg="CreateContainer within sandbox \"c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 10 00:40:42.145885 systemd-networkd[1458]: vxlan.calico: Gained IPv6LL
Jul 10 00:40:42.150659 containerd[1580]: time="2025-07-10T00:40:42.149244271Z" level=info msg="Container cbfcabeaaa736bde9868754654a9740504f9e3bd56272916a64dcfe096b5ac85: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:40:42.163425 containerd[1580]: time="2025-07-10T00:40:42.163402937Z" level=info msg="CreateContainer within sandbox \"c5e9d9dd6ce51b06862c41afd5855c8e2e701194d5ebd5f390ac16ac910bd2a4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cbfcabeaaa736bde9868754654a9740504f9e3bd56272916a64dcfe096b5ac85\""
Jul 10 00:40:42.164117 containerd[1580]: time="2025-07-10T00:40:42.164099397Z" level=info msg="StartContainer for \"cbfcabeaaa736bde9868754654a9740504f9e3bd56272916a64dcfe096b5ac85\""
Jul 10 00:40:42.165285 containerd[1580]: time="2025-07-10T00:40:42.165266190Z" level=info msg="connecting to shim cbfcabeaaa736bde9868754654a9740504f9e3bd56272916a64dcfe096b5ac85" address="unix:///run/containerd/s/65909ecd230a78a9602354fbc6377ac208779847f0989e5b49b708ea38931115" protocol=ttrpc version=3
Jul 10 00:40:42.193079 systemd[1]: Started cri-containerd-cbfcabeaaa736bde9868754654a9740504f9e3bd56272916a64dcfe096b5ac85.scope - libcontainer container cbfcabeaaa736bde9868754654a9740504f9e3bd56272916a64dcfe096b5ac85.
Jul 10 00:40:42.263252 containerd[1580]: time="2025-07-10T00:40:42.263178979Z" level=info msg="StartContainer for \"cbfcabeaaa736bde9868754654a9740504f9e3bd56272916a64dcfe096b5ac85\" returns successfully"
Jul 10 00:40:42.643369 kubelet[2714]: E0710 00:40:42.642987 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:40:42.654471 kubelet[2714]: I0710 00:40:42.654397 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-fjcsh" podStartSLOduration=21.267861458 podStartE2EDuration="24.654279604s" podCreationTimestamp="2025-07-10 00:40:18 +0000 UTC" firstStartedPulling="2025-07-10 00:40:36.683372811 +0000 UTC m=+34.355382359" lastFinishedPulling="2025-07-10 00:40:40.069790957 +0000 UTC m=+37.741800505" observedRunningTime="2025-07-10 00:40:40.671792957 +0000 UTC m=+38.343802515" watchObservedRunningTime="2025-07-10 00:40:42.654279604 +0000 UTC m=+40.326289152"
Jul 10 00:40:42.690735 kubelet[2714]: I0710 00:40:42.690082 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84597f7d9f-264zs" podStartSLOduration=22.451796818 podStartE2EDuration="26.690068226s" podCreationTimestamp="2025-07-10 00:40:16 +0000 UTC" firstStartedPulling="2025-07-10 00:40:37.721369144 +0000 UTC m=+35.393378692" lastFinishedPulling="2025-07-10 00:40:41.959640552 +0000 UTC m=+39.631650100" observedRunningTime="2025-07-10 00:40:42.661757967 +0000 UTC m=+40.333767515" watchObservedRunningTime="2025-07-10 00:40:42.690068226 +0000 UTC m=+40.362077774"
Jul 10 00:40:42.855177 containerd[1580]: time="2025-07-10T00:40:42.855134221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"c9a374c60f116f746cbfa9fa26e43f2cd9ed3f9b9bd2b826d9c33468cbd04f46\" pid:5190 exit_status:1 exited_at:{seconds:1752108042 nanos:853927009}"
Jul 10 00:40:43.031405 containerd[1580]: time="2025-07-10T00:40:43.031354384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:43.033954 containerd[1580]: time="2025-07-10T00:40:43.033905439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 10 00:40:43.041342 containerd[1580]: time="2025-07-10T00:40:43.041296871Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:43.046854 containerd[1580]: time="2025-07-10T00:40:43.046794681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:43.049243 containerd[1580]: time="2025-07-10T00:40:43.049141984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 910.01149ms"
Jul 10 00:40:43.049243 containerd[1580]: time="2025-07-10T00:40:43.049188514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 10 00:40:43.064771 containerd[1580]: time="2025-07-10T00:40:43.064749281Z" level=info msg="CreateContainer within sandbox \"66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 10 00:40:43.077653 containerd[1580]: time="2025-07-10T00:40:43.076412600Z" level=info msg="Container cb34d5a01d67fa6cb003e4eca8e53cfe962c78217fbef0a503582f031aa2a218: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:40:43.087644 containerd[1580]: time="2025-07-10T00:40:43.087392258Z" level=info msg="CreateContainer within sandbox \"66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cb34d5a01d67fa6cb003e4eca8e53cfe962c78217fbef0a503582f031aa2a218\""
Jul 10 00:40:43.088645 containerd[1580]: time="2025-07-10T00:40:43.088478001Z" level=info msg="StartContainer for \"cb34d5a01d67fa6cb003e4eca8e53cfe962c78217fbef0a503582f031aa2a218\""
Jul 10 00:40:43.091167 containerd[1580]: time="2025-07-10T00:40:43.091147155Z" level=info msg="connecting to shim cb34d5a01d67fa6cb003e4eca8e53cfe962c78217fbef0a503582f031aa2a218" address="unix:///run/containerd/s/85d49df97afe182f9d5d3a3dc04167bdd17d90c4a4cda41393f501dd705609c1" protocol=ttrpc version=3
Jul 10 00:40:43.129103 systemd[1]: Started cri-containerd-cb34d5a01d67fa6cb003e4eca8e53cfe962c78217fbef0a503582f031aa2a218.scope - libcontainer container cb34d5a01d67fa6cb003e4eca8e53cfe962c78217fbef0a503582f031aa2a218.
Jul 10 00:40:43.213426 containerd[1580]: time="2025-07-10T00:40:43.213378621Z" level=info msg="StartContainer for \"cb34d5a01d67fa6cb003e4eca8e53cfe962c78217fbef0a503582f031aa2a218\" returns successfully"
Jul 10 00:40:43.216026 containerd[1580]: time="2025-07-10T00:40:43.215975215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 10 00:40:43.646148 kubelet[2714]: I0710 00:40:43.646108 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 00:40:43.646552 kubelet[2714]: I0710 00:40:43.646484 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 00:40:44.261217 containerd[1580]: time="2025-07-10T00:40:44.260644121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:44.261598 containerd[1580]: time="2025-07-10T00:40:44.261288762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 10 00:40:44.262023 containerd[1580]: time="2025-07-10T00:40:44.261985514Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:44.263147 containerd[1580]: time="2025-07-10T00:40:44.263106715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:40:44.263753 containerd[1580]: time="2025-07-10T00:40:44.263718056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.047719011s"
Jul 10 00:40:44.263805 containerd[1580]: time="2025-07-10T00:40:44.263753567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 10 00:40:44.267769 containerd[1580]: time="2025-07-10T00:40:44.267735493Z" level=info msg="CreateContainer within sandbox \"66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 10 00:40:44.276999 containerd[1580]: time="2025-07-10T00:40:44.276958328Z" level=info msg="Container f4a3263f62ed2688b6ed66cfcb10fcaf4b7ca194e56cf3618e8a196470ac9cbd: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:40:44.282352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964342965.mount: Deactivated successfully.
Jul 10 00:40:44.286350 containerd[1580]: time="2025-07-10T00:40:44.286312413Z" level=info msg="CreateContainer within sandbox \"66e9254505d57ca4395bb755bb8e117026638b38fe09a182906d9adc1d2d9eed\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f4a3263f62ed2688b6ed66cfcb10fcaf4b7ca194e56cf3618e8a196470ac9cbd\""
Jul 10 00:40:44.286802 containerd[1580]: time="2025-07-10T00:40:44.286754195Z" level=info msg="StartContainer for \"f4a3263f62ed2688b6ed66cfcb10fcaf4b7ca194e56cf3618e8a196470ac9cbd\""
Jul 10 00:40:44.288003 containerd[1580]: time="2025-07-10T00:40:44.287961836Z" level=info msg="connecting to shim f4a3263f62ed2688b6ed66cfcb10fcaf4b7ca194e56cf3618e8a196470ac9cbd" address="unix:///run/containerd/s/85d49df97afe182f9d5d3a3dc04167bdd17d90c4a4cda41393f501dd705609c1" protocol=ttrpc version=3
Jul 10 00:40:44.316840 systemd[1]: Started cri-containerd-f4a3263f62ed2688b6ed66cfcb10fcaf4b7ca194e56cf3618e8a196470ac9cbd.scope - libcontainer container f4a3263f62ed2688b6ed66cfcb10fcaf4b7ca194e56cf3618e8a196470ac9cbd.
Jul 10 00:40:44.354789 containerd[1580]: time="2025-07-10T00:40:44.354743865Z" level=info msg="StartContainer for \"f4a3263f62ed2688b6ed66cfcb10fcaf4b7ca194e56cf3618e8a196470ac9cbd\" returns successfully"
Jul 10 00:40:44.377832 kubelet[2714]: I0710 00:40:44.377779 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84597f7d9f-fzrmq" podStartSLOduration=24.998759129 podStartE2EDuration="28.377765234s" podCreationTimestamp="2025-07-10 00:40:16 +0000 UTC" firstStartedPulling="2025-07-10 00:40:38.758558637 +0000 UTC m=+36.430568185" lastFinishedPulling="2025-07-10 00:40:42.137564742 +0000 UTC m=+39.809574290" observedRunningTime="2025-07-10 00:40:42.690253316 +0000 UTC m=+40.362262864" watchObservedRunningTime="2025-07-10 00:40:44.377765234 +0000 UTC m=+42.049774782"
Jul 10 00:40:44.484440 kubelet[2714]: I0710 00:40:44.484414 2714 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 10 00:40:44.485769 kubelet[2714]: I0710 00:40:44.485745 2714 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 10 00:40:44.663069 kubelet[2714]: I0710 00:40:44.662761 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nbsg4" podStartSLOduration=22.385682928 podStartE2EDuration="26.66274596s" podCreationTimestamp="2025-07-10 00:40:18 +0000 UTC" firstStartedPulling="2025-07-10 00:40:39.987508395 +0000 UTC m=+37.659517943" lastFinishedPulling="2025-07-10 00:40:44.264571427 +0000 UTC m=+41.936580975" observedRunningTime="2025-07-10 00:40:44.66216218 +0000 UTC m=+42.334171738" watchObservedRunningTime="2025-07-10 00:40:44.66274596 +0000 UTC m=+42.334755508"
Jul 10 00:40:47.777400 kubelet[2714]: I0710 00:40:47.777085 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 00:40:47.814186 containerd[1580]: time="2025-07-10T00:40:47.814061038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"2352d73d7c62fccc68b51ef1c7b92e1924df67ea2505032e7d323a109aa576fb\" pid:5297 exited_at:{seconds:1752108047 nanos:813855888}"
Jul 10 00:40:47.861892 containerd[1580]: time="2025-07-10T00:40:47.861820621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"29717d3c806a3f435dc330c9cc85632121ad6dabab0f1388ad9922cc455c6cae\" pid:5318 exited_at:{seconds:1752108047 nanos:861507190}"
Jul 10 00:40:51.381949 containerd[1580]: time="2025-07-10T00:40:51.381871773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"df095b76b9000ec22f45d199b005e865a4916a1ab7b9df2c40de867d2e31f35c\" pid:5348 exited_at:{seconds:1752108051 nanos:381678162}"
Jul 10 00:41:01.924070 containerd[1580]: time="2025-07-10T00:41:01.923926794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" id:\"e87c3e0a5913a8022a9229c49d5c7e06cb67b0b25387b806a3370edd6d5802a2\" pid:5380 exited_at:{seconds:1752108061 nanos:923464723}"
Jul 10 00:41:09.372866 systemd[1]: Started sshd@7-172.238.161.214:22-195.178.110.160:45946.service - OpenSSH per-connection server daemon (195.178.110.160:45946).
Jul 10 00:41:09.893886 sshd[5400]: Connection closed by authenticating user root 195.178.110.160 port 45946 [preauth]
Jul 10 00:41:09.896667 systemd[1]: sshd@7-172.238.161.214:22-195.178.110.160:45946.service: Deactivated successfully.
Jul 10 00:41:10.001114 systemd[1]: Started sshd@8-172.238.161.214:22-195.178.110.160:45962.service - OpenSSH per-connection server daemon (195.178.110.160:45962).
Jul 10 00:41:10.508444 sshd[5406]: Connection closed by authenticating user root 195.178.110.160 port 45962 [preauth]
Jul 10 00:41:10.511965 systemd[1]: sshd@8-172.238.161.214:22-195.178.110.160:45962.service: Deactivated successfully.
Jul 10 00:41:10.627189 systemd[1]: Started sshd@9-172.238.161.214:22-195.178.110.160:36522.service - OpenSSH per-connection server daemon (195.178.110.160:36522).
Jul 10 00:41:11.139865 sshd[5411]: Connection closed by authenticating user root 195.178.110.160 port 36522 [preauth]
Jul 10 00:41:11.142281 systemd[1]: sshd@9-172.238.161.214:22-195.178.110.160:36522.service: Deactivated successfully.
Jul 10 00:41:11.249428 systemd[1]: Started sshd@10-172.238.161.214:22-195.178.110.160:36534.service - OpenSSH per-connection server daemon (195.178.110.160:36534).
Jul 10 00:41:11.770045 sshd[5416]: Connection closed by authenticating user root 195.178.110.160 port 36534 [preauth]
Jul 10 00:41:11.771774 systemd[1]: sshd@10-172.238.161.214:22-195.178.110.160:36534.service: Deactivated successfully.
Jul 10 00:41:11.876820 systemd[1]: Started sshd@11-172.238.161.214:22-195.178.110.160:36544.service - OpenSSH per-connection server daemon (195.178.110.160:36544).
Jul 10 00:41:12.396818 sshd[5421]: Connection closed by authenticating user root 195.178.110.160 port 36544 [preauth]
Jul 10 00:41:12.399385 systemd[1]: sshd@11-172.238.161.214:22-195.178.110.160:36544.service: Deactivated successfully.
Jul 10 00:41:12.722423 containerd[1580]: time="2025-07-10T00:41:12.722384366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"62b03643c8afe01b3d49073eb18ad53152a1443d8e02bc4d6d2d053089a40078\" pid:5437 exited_at:{seconds:1752108072 nanos:721966058}"
Jul 10 00:41:14.701860 kubelet[2714]: I0710 00:41:14.701787 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 00:41:17.414593 kubelet[2714]: E0710 00:41:17.414554 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:41:17.859616 containerd[1580]: time="2025-07-10T00:41:17.859438432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"c1f4ff4c930a5f8beb5f6787ad5682a4944cd81ac20fc4b1ef3fc41a4b449eef\" pid:5463 exited_at:{seconds:1752108077 nanos:858822985}"
Jul 10 00:41:19.414197 kubelet[2714]: E0710 00:41:19.414105 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:41:21.414651 kubelet[2714]: E0710 00:41:21.414418 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:41:26.485818 containerd[1580]: time="2025-07-10T00:41:26.485764220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"a46f99a8e0315d98cbf0276bd57b6d707870ef0439edcb735142c4952a39869b\" pid:5493 exited_at:{seconds:1752108086 nanos:484335477}"
Jul 10 00:41:31.937448 containerd[1580]: time="2025-07-10T00:41:31.937408706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" id:\"3a19e6d34924d367e9e69c9907cce927a93dcde96315051e3810c9d35e36763c\" pid:5515 exited_at:{seconds:1752108091 nanos:936942947}"
Jul 10 00:41:41.413948 kubelet[2714]: E0710 00:41:41.413902 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:41:42.726373 containerd[1580]: time="2025-07-10T00:41:42.726327254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"8291ed419f62a03172761287359c586a1429477071e0a973668aa64376bd3a43\" pid:5542 exited_at:{seconds:1752108102 nanos:725957715}"
Jul 10 00:41:45.415242 kubelet[2714]: E0710 00:41:45.414169 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:41:47.414521 kubelet[2714]: E0710 00:41:47.414488 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:41:47.862030 containerd[1580]: time="2025-07-10T00:41:47.861991054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"578efa6726140c12590e78ffee7b4f35bafb915880575ab46286663cddc06433\" pid:5563 exited_at:{seconds:1752108107 nanos:860986456}"
Jul 10 00:41:51.386194 containerd[1580]: time="2025-07-10T00:41:51.386141140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"d136b0278f306db4debfd0ac0c464133dbec9087cfe0ecaebab453b66aa23d7c\" pid:5585 exited_at:{seconds:1752108111 nanos:385757542}"
Jul 10 00:41:54.415252 kubelet[2714]: E0710 00:41:54.414658 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:42:01.919501 containerd[1580]: time="2025-07-10T00:42:01.919424724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" id:\"fb65ad27a24508a5d3157292f3df81723f248a486e1644620b921ed5e2f87f3e\" pid:5615 exited_at:{seconds:1752108121 nanos:918739535}"
Jul 10 00:42:12.709105 containerd[1580]: time="2025-07-10T00:42:12.709042004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"dac84db3b68541d97971f24dbc5cd3ebdec925790bcd1c7d3b1180a28bda70ee\" pid:5643 exited_at:{seconds:1752108132 nanos:708701015}"
Jul 10 00:42:17.854580 containerd[1580]: time="2025-07-10T00:42:17.854510530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"819a11a4fd4b516281bda06c6dd9d3141f53bae0a46c3df232da9ccfe952a2d3\" pid:5673 exited_at:{seconds:1752108137 nanos:853997461}"
Jul 10 00:42:21.414195 kubelet[2714]: E0710 00:42:21.414159 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:42:22.414899 kubelet[2714]: E0710 00:42:22.414199 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:42:26.480953 containerd[1580]: time="2025-07-10T00:42:26.480915436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"3cd15a3c70aa0b14cc01b831cb972755ef88a8c1867841a31d88093b760d3c32\" pid:5710 exited_at:{seconds:1752108146 nanos:480735357}"
Jul 10 00:42:31.934705 containerd[1580]: time="2025-07-10T00:42:31.934594973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" id:\"4e685c2fe3e9a85bb45b5411cadb77280ba9500d1f4019ae13416f2ea382d91d\" pid:5732 exited_at:{seconds:1752108151 nanos:934164134}"
Jul 10 00:42:34.414665 kubelet[2714]: E0710 00:42:34.414094 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:42:36.334654 update_engine[1554]: I20250710 00:42:36.334585 1554 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 10 00:42:36.335046 update_engine[1554]: I20250710 00:42:36.334664 1554 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 10 00:42:36.335046 update_engine[1554]: I20250710 00:42:36.334894 1554 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 10 00:42:36.336838 update_engine[1554]: I20250710 00:42:36.336777 1554 omaha_request_params.cc:62] Current group set to beta
Jul 10 00:42:36.337032 update_engine[1554]: I20250710 00:42:36.336931 1554 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 10 00:42:36.337032 update_engine[1554]: I20250710 00:42:36.336947 1554 update_attempter.cc:643] Scheduling an action processor start.
Jul 10 00:42:36.337032 update_engine[1554]: I20250710 00:42:36.336967 1554 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 10 00:42:36.340294 update_engine[1554]: I20250710 00:42:36.340271 1554 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 10 00:42:36.340418 update_engine[1554]: I20250710 00:42:36.340399 1554 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 10 00:42:36.341344 locksmithd[1583]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 10 00:42:36.341704 update_engine[1554]: I20250710 00:42:36.341368 1554 omaha_request_action.cc:272] Request:
Jul 10 00:42:36.341704 update_engine[1554]:
Jul 10 00:42:36.341704 update_engine[1554]:
Jul 10 00:42:36.341704 update_engine[1554]:
Jul 10 00:42:36.341704 update_engine[1554]:
Jul 10 00:42:36.341704 update_engine[1554]:
Jul 10 00:42:36.341704 update_engine[1554]:
Jul 10 00:42:36.341704 update_engine[1554]:
Jul 10 00:42:36.341704 update_engine[1554]:
Jul 10 00:42:36.341704 update_engine[1554]: I20250710 00:42:36.341382 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 00:42:36.345202 update_engine[1554]: I20250710 00:42:36.345165 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 00:42:36.345517 update_engine[1554]: I20250710 00:42:36.345479 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 00:42:36.357138 update_engine[1554]: E20250710 00:42:36.357096 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 00:42:36.357204 update_engine[1554]: I20250710 00:42:36.357189 1554 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 10 00:42:42.704798 containerd[1580]: time="2025-07-10T00:42:42.704749571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"60f4f52bad238f59cb6e5d7209e2b4b5baecd8a36f01515febd939ccb58775d4\" pid:5760 exited_at:{seconds:1752108162 nanos:704425571}"
Jul 10 00:42:46.247125 update_engine[1554]: I20250710 00:42:46.247061 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 00:42:46.247568 update_engine[1554]: I20250710 00:42:46.247316 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 00:42:46.247568 update_engine[1554]: I20250710 00:42:46.247535 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 00:42:46.248432 update_engine[1554]: E20250710 00:42:46.248399 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 00:42:46.248474 update_engine[1554]: I20250710 00:42:46.248448 1554 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 10 00:42:47.859074 containerd[1580]: time="2025-07-10T00:42:47.859030469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"6d0b0aade1fdb92b115b1a4c09fde6c19a3ae731026abb615c970d2f5d6e7bde\" pid:5783 exited_at:{seconds:1752108167 nanos:858243239}"
Jul 10 00:42:51.383674 containerd[1580]: time="2025-07-10T00:42:51.383603180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"e98172dbfa62ccded02937978494b1179d9bc3fdbdcbb15d8306a020486f4b46\" pid:5805 exited_at:{seconds:1752108171 nanos:383358390}"
Jul 10 00:42:55.414457 kubelet[2714]: E0710 00:42:55.414430 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jul 10 00:42:56.243971 update_engine[1554]: I20250710 00:42:56.243906 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 00:42:56.244396 update_engine[1554]: I20250710 00:42:56.244151 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 00:42:56.244396 update_engine[1554]: I20250710 00:42:56.244371 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 00:42:56.245269 update_engine[1554]: E20250710 00:42:56.245230 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:42:56.245302 update_engine[1554]: I20250710 00:42:56.245282 1554 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 10 00:43:00.414298 kubelet[2714]: E0710 00:43:00.413643 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:43:01.924309 containerd[1580]: time="2025-07-10T00:43:01.924269125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" id:\"6f87b3ec6a9c8084672cceaa995ee3bbe51a1d34c46ba4c16195795b143d549a\" pid:5827 exited_at:{seconds:1752108181 nanos:923843986}" Jul 10 00:43:06.246755 update_engine[1554]: I20250710 00:43:06.246688 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:43:06.247145 update_engine[1554]: I20250710 00:43:06.246941 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:43:06.247182 update_engine[1554]: I20250710 00:43:06.247151 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 10 00:43:06.247992 update_engine[1554]: E20250710 00:43:06.247934 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:43:06.248124 update_engine[1554]: I20250710 00:43:06.248008 1554 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 10 00:43:06.248124 update_engine[1554]: I20250710 00:43:06.248016 1554 omaha_request_action.cc:617] Omaha request response: Jul 10 00:43:06.248124 update_engine[1554]: E20250710 00:43:06.248095 1554 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jul 10 00:43:06.248124 update_engine[1554]: I20250710 00:43:06.248116 1554 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 10 00:43:06.248124 update_engine[1554]: I20250710 00:43:06.248122 1554 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 00:43:06.248230 update_engine[1554]: I20250710 00:43:06.248128 1554 update_attempter.cc:306] Processing Done. Jul 10 00:43:06.248230 update_engine[1554]: E20250710 00:43:06.248144 1554 update_attempter.cc:619] Update failed. Jul 10 00:43:06.248230 update_engine[1554]: I20250710 00:43:06.248150 1554 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 10 00:43:06.248230 update_engine[1554]: I20250710 00:43:06.248157 1554 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 10 00:43:06.248230 update_engine[1554]: I20250710 00:43:06.248162 1554 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 10 00:43:06.248501 update_engine[1554]: I20250710 00:43:06.248384 1554 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 10 00:43:06.248501 update_engine[1554]: I20250710 00:43:06.248417 1554 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 10 00:43:06.248501 update_engine[1554]: I20250710 00:43:06.248423 1554 omaha_request_action.cc:272] Request: Jul 10 00:43:06.248501 update_engine[1554]: Jul 10 00:43:06.248501 update_engine[1554]: Jul 10 00:43:06.248501 update_engine[1554]: Jul 10 00:43:06.248501 update_engine[1554]: Jul 10 00:43:06.248501 update_engine[1554]: Jul 10 00:43:06.248501 update_engine[1554]: Jul 10 00:43:06.248501 update_engine[1554]: I20250710 00:43:06.248430 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:43:06.248718 locksmithd[1583]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 10 00:43:06.248998 update_engine[1554]: I20250710 00:43:06.248664 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:43:06.248998 update_engine[1554]: I20250710 00:43:06.248871 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 10 00:43:06.249836 update_engine[1554]: E20250710 00:43:06.249804 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:43:06.249895 update_engine[1554]: I20250710 00:43:06.249867 1554 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 10 00:43:06.249895 update_engine[1554]: I20250710 00:43:06.249876 1554 omaha_request_action.cc:617] Omaha request response: Jul 10 00:43:06.249895 update_engine[1554]: I20250710 00:43:06.249883 1554 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 00:43:06.249895 update_engine[1554]: I20250710 00:43:06.249890 1554 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 00:43:06.249968 update_engine[1554]: I20250710 00:43:06.249897 1554 update_attempter.cc:306] Processing Done. Jul 10 00:43:06.249968 update_engine[1554]: I20250710 00:43:06.249904 1554 update_attempter.cc:310] Error event sent. 
Jul 10 00:43:06.249968 update_engine[1554]: I20250710 00:43:06.249912 1554 update_check_scheduler.cc:74] Next update check in 41m40s Jul 10 00:43:06.250307 locksmithd[1583]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 10 00:43:10.414862 kubelet[2714]: E0710 00:43:10.414324 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:43:12.705983 containerd[1580]: time="2025-07-10T00:43:12.705929115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"1a067739e23bf767d0ba0bf42984c92df1482a4edc8024f7986ede0ce7f340f3\" pid:5854 exited_at:{seconds:1752108192 nanos:705472505}" Jul 10 00:43:17.414334 kubelet[2714]: E0710 00:43:17.414301 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:43:17.860387 containerd[1580]: time="2025-07-10T00:43:17.860341184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"1a1ddc2493170d0849dd02e7785dbb19ac8e77b75f4911f0d6adc0725f1884a7\" pid:5877 exited_at:{seconds:1752108197 nanos:860103534}" Jul 10 00:43:21.692128 systemd[1]: Started sshd@12-172.238.161.214:22-139.178.89.65:53208.service - OpenSSH per-connection server daemon (139.178.89.65:53208). Jul 10 00:43:22.027090 sshd[5898]: Accepted publickey for core from 139.178.89.65 port 53208 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:43:22.028986 sshd-session[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:43:22.033942 systemd-logind[1548]: New session 8 of user core. 
Jul 10 00:43:22.038820 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:43:22.334825 sshd[5900]: Connection closed by 139.178.89.65 port 53208 Jul 10 00:43:22.335705 sshd-session[5898]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:22.339799 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:43:22.340683 systemd[1]: sshd@12-172.238.161.214:22-139.178.89.65:53208.service: Deactivated successfully. Jul 10 00:43:22.343322 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:43:22.345067 systemd-logind[1548]: Removed session 8. Jul 10 00:43:26.479336 containerd[1580]: time="2025-07-10T00:43:26.479264981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"59c686ded3c8d3896a29ab8a704f293976d68860b5948fc2c588a1ac55987bdf\" pid:5927 exited_at:{seconds:1752108206 nanos:479080910}" Jul 10 00:43:27.400691 systemd[1]: Started sshd@13-172.238.161.214:22-139.178.89.65:53210.service - OpenSSH per-connection server daemon (139.178.89.65:53210). Jul 10 00:43:27.745399 sshd[5937]: Accepted publickey for core from 139.178.89.65 port 53210 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:43:27.747865 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:43:27.753150 systemd-logind[1548]: New session 9 of user core. Jul 10 00:43:27.762866 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:43:28.052417 sshd[5939]: Connection closed by 139.178.89.65 port 53210 Jul 10 00:43:28.053031 sshd-session[5937]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:28.057326 systemd[1]: sshd@13-172.238.161.214:22-139.178.89.65:53210.service: Deactivated successfully. Jul 10 00:43:28.060220 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:43:28.061348 systemd-logind[1548]: Session 9 logged out. 
Waiting for processes to exit. Jul 10 00:43:28.062791 systemd-logind[1548]: Removed session 9. Jul 10 00:43:30.415019 kubelet[2714]: E0710 00:43:30.414344 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:43:31.924921 containerd[1580]: time="2025-07-10T00:43:31.924876596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" id:\"8e30a752f38341f5f68fd1ed9819e123ffe73e7daeedc99ccb386c16aac3a79a\" pid:5964 exited_at:{seconds:1752108211 nanos:924568447}" Jul 10 00:43:33.112713 systemd[1]: Started sshd@14-172.238.161.214:22-139.178.89.65:55348.service - OpenSSH per-connection server daemon (139.178.89.65:55348). Jul 10 00:43:33.453529 sshd[5977]: Accepted publickey for core from 139.178.89.65 port 55348 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:43:33.455364 sshd-session[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:43:33.460029 systemd-logind[1548]: New session 10 of user core. Jul 10 00:43:33.464774 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:43:33.753846 sshd[5979]: Connection closed by 139.178.89.65 port 55348 Jul 10 00:43:33.754384 sshd-session[5977]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:33.758394 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:43:33.759086 systemd[1]: sshd@14-172.238.161.214:22-139.178.89.65:55348.service: Deactivated successfully. Jul 10 00:43:33.761447 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:43:33.764419 systemd-logind[1548]: Removed session 10. Jul 10 00:43:38.822373 systemd[1]: Started sshd@15-172.238.161.214:22-139.178.89.65:55362.service - OpenSSH per-connection server daemon (139.178.89.65:55362). 
Jul 10 00:43:39.166771 sshd[5992]: Accepted publickey for core from 139.178.89.65 port 55362 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:43:39.168081 sshd-session[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:43:39.173614 systemd-logind[1548]: New session 11 of user core. Jul 10 00:43:39.181770 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:43:39.472178 sshd[5996]: Connection closed by 139.178.89.65 port 55362 Jul 10 00:43:39.472701 sshd-session[5992]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:39.476804 systemd[1]: sshd@15-172.238.161.214:22-139.178.89.65:55362.service: Deactivated successfully. Jul 10 00:43:39.479186 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:43:39.480415 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:43:39.481960 systemd-logind[1548]: Removed session 11. Jul 10 00:43:39.529374 systemd[1]: Started sshd@16-172.238.161.214:22-139.178.89.65:55364.service - OpenSSH per-connection server daemon (139.178.89.65:55364). Jul 10 00:43:39.865517 sshd[6009]: Accepted publickey for core from 139.178.89.65 port 55364 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:43:39.867100 sshd-session[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:43:39.871389 systemd-logind[1548]: New session 12 of user core. Jul 10 00:43:39.874790 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:43:40.183018 sshd[6011]: Connection closed by 139.178.89.65 port 55364 Jul 10 00:43:40.183353 sshd-session[6009]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:40.188774 systemd[1]: sshd@16-172.238.161.214:22-139.178.89.65:55364.service: Deactivated successfully. Jul 10 00:43:40.191359 systemd[1]: session-12.scope: Deactivated successfully. 
Jul 10 00:43:40.192326 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:43:40.194692 systemd-logind[1548]: Removed session 12. Jul 10 00:43:40.251514 systemd[1]: Started sshd@17-172.238.161.214:22-139.178.89.65:49624.service - OpenSSH per-connection server daemon (139.178.89.65:49624). Jul 10 00:43:40.587816 sshd[6021]: Accepted publickey for core from 139.178.89.65 port 49624 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:43:40.589056 sshd-session[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:43:40.593654 systemd-logind[1548]: New session 13 of user core. Jul 10 00:43:40.597737 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:43:40.893414 sshd[6023]: Connection closed by 139.178.89.65 port 49624 Jul 10 00:43:40.893822 sshd-session[6021]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:40.898307 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:43:40.898969 systemd[1]: sshd@17-172.238.161.214:22-139.178.89.65:49624.service: Deactivated successfully. Jul 10 00:43:40.901350 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:43:40.903412 systemd-logind[1548]: Removed session 13. Jul 10 00:43:42.703537 containerd[1580]: time="2025-07-10T00:43:42.703493482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"bc08309bedbc0abb1c8d8e3e4bff06aadec08ab1aebe178ab3de899acb9e6975\" pid:6047 exited_at:{seconds:1752108222 nanos:703170262}" Jul 10 00:43:45.960287 systemd[1]: Started sshd@18-172.238.161.214:22-139.178.89.65:49640.service - OpenSSH per-connection server daemon (139.178.89.65:49640). 
Jul 10 00:43:46.301921 sshd[6062]: Accepted publickey for core from 139.178.89.65 port 49640 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:43:46.303198 sshd-session[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:43:46.307829 systemd-logind[1548]: New session 14 of user core. Jul 10 00:43:46.314745 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:43:46.599040 sshd[6064]: Connection closed by 139.178.89.65 port 49640 Jul 10 00:43:46.599757 sshd-session[6062]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:46.603446 systemd[1]: sshd@18-172.238.161.214:22-139.178.89.65:49640.service: Deactivated successfully. Jul 10 00:43:46.605678 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:43:46.606688 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:43:46.608232 systemd-logind[1548]: Removed session 14. Jul 10 00:43:47.851444 containerd[1580]: time="2025-07-10T00:43:47.851398908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"e28ee76db8d1d5e76583b9dcbc20f8e858213aa3da2bf925ca74b7394d148c9f\" pid:6090 exited_at:{seconds:1752108227 nanos:851219318}" Jul 10 00:43:50.414567 kubelet[2714]: E0710 00:43:50.414307 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:43:51.375882 containerd[1580]: time="2025-07-10T00:43:51.375837924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"17c64380075087ede59453b590586ea2b56548fc830a56a28c054e32a448a455\" pid:6112 exited_at:{seconds:1752108231 nanos:375659043}" Jul 10 00:43:51.659312 systemd[1]: Started 
sshd@19-172.238.161.214:22-139.178.89.65:57676.service - OpenSSH per-connection server daemon (139.178.89.65:57676). Jul 10 00:43:51.992515 sshd[6124]: Accepted publickey for core from 139.178.89.65 port 57676 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:43:51.994365 sshd-session[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:43:51.998581 systemd-logind[1548]: New session 15 of user core. Jul 10 00:43:52.002752 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:43:52.286667 sshd[6133]: Connection closed by 139.178.89.65 port 57676 Jul 10 00:43:52.287154 sshd-session[6124]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:52.291538 systemd[1]: sshd@19-172.238.161.214:22-139.178.89.65:57676.service: Deactivated successfully. Jul 10 00:43:52.293472 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:43:52.294589 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:43:52.295757 systemd-logind[1548]: Removed session 15. Jul 10 00:43:57.350527 systemd[1]: Started sshd@20-172.238.161.214:22-139.178.89.65:57690.service - OpenSSH per-connection server daemon (139.178.89.65:57690). Jul 10 00:43:57.686804 sshd[6158]: Accepted publickey for core from 139.178.89.65 port 57690 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:43:57.689879 sshd-session[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:43:57.698270 systemd-logind[1548]: New session 16 of user core. Jul 10 00:43:57.703836 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:43:57.992046 sshd[6160]: Connection closed by 139.178.89.65 port 57690 Jul 10 00:43:57.993449 sshd-session[6158]: pam_unix(sshd:session): session closed for user core Jul 10 00:43:57.997213 systemd[1]: sshd@20-172.238.161.214:22-139.178.89.65:57690.service: Deactivated successfully. 
Jul 10 00:43:58.000130 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:43:58.001707 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:43:58.004763 systemd-logind[1548]: Removed session 16. Jul 10 00:44:01.414791 kubelet[2714]: E0710 00:44:01.414589 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:44:01.928594 containerd[1580]: time="2025-07-10T00:44:01.928554031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c06e7a69f3285871438593d6fc0ba46d6705b28ab372a370162966bf3136357\" id:\"5d914169fda17f553ef2f5f9ea15630bc16ee2b2b4ca7993c06053b31d062d45\" pid:6183 exited_at:{seconds:1752108241 nanos:928115312}" Jul 10 00:44:03.053238 systemd[1]: Started sshd@21-172.238.161.214:22-139.178.89.65:50524.service - OpenSSH per-connection server daemon (139.178.89.65:50524). Jul 10 00:44:03.379658 sshd[6197]: Accepted publickey for core from 139.178.89.65 port 50524 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:44:03.381095 sshd-session[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:44:03.385991 systemd-logind[1548]: New session 17 of user core. Jul 10 00:44:03.388778 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:44:03.414651 kubelet[2714]: E0710 00:44:03.414013 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:44:03.677308 sshd[6199]: Connection closed by 139.178.89.65 port 50524 Jul 10 00:44:03.677921 sshd-session[6197]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:03.682246 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit. 
Jul 10 00:44:03.683057 systemd[1]: sshd@21-172.238.161.214:22-139.178.89.65:50524.service: Deactivated successfully. Jul 10 00:44:03.685457 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:44:03.687533 systemd-logind[1548]: Removed session 17. Jul 10 00:44:03.742009 systemd[1]: Started sshd@22-172.238.161.214:22-139.178.89.65:50538.service - OpenSSH per-connection server daemon (139.178.89.65:50538). Jul 10 00:44:04.076870 sshd[6212]: Accepted publickey for core from 139.178.89.65 port 50538 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:44:04.078166 sshd-session[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:44:04.082747 systemd-logind[1548]: New session 18 of user core. Jul 10 00:44:04.088770 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:44:04.514755 sshd[6214]: Connection closed by 139.178.89.65 port 50538 Jul 10 00:44:04.515647 sshd-session[6212]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:04.520192 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:44:04.520760 systemd[1]: sshd@22-172.238.161.214:22-139.178.89.65:50538.service: Deactivated successfully. Jul 10 00:44:04.523292 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:44:04.525362 systemd-logind[1548]: Removed session 18. Jul 10 00:44:04.580854 systemd[1]: Started sshd@23-172.238.161.214:22-139.178.89.65:50550.service - OpenSSH per-connection server daemon (139.178.89.65:50550). Jul 10 00:44:04.920037 sshd[6224]: Accepted publickey for core from 139.178.89.65 port 50550 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:44:04.921478 sshd-session[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:44:04.926415 systemd-logind[1548]: New session 19 of user core. Jul 10 00:44:04.932736 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 10 00:44:05.831152 sshd[6226]: Connection closed by 139.178.89.65 port 50550 Jul 10 00:44:05.831729 sshd-session[6224]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:05.835867 systemd[1]: sshd@23-172.238.161.214:22-139.178.89.65:50550.service: Deactivated successfully. Jul 10 00:44:05.838439 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:44:05.839731 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:44:05.841812 systemd-logind[1548]: Removed session 19. Jul 10 00:44:05.890321 systemd[1]: Started sshd@24-172.238.161.214:22-139.178.89.65:50556.service - OpenSSH per-connection server daemon (139.178.89.65:50556). Jul 10 00:44:06.216779 sshd[6243]: Accepted publickey for core from 139.178.89.65 port 50556 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:44:06.218451 sshd-session[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:44:06.222945 systemd-logind[1548]: New session 20 of user core. Jul 10 00:44:06.227747 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:44:06.592788 sshd[6245]: Connection closed by 139.178.89.65 port 50556 Jul 10 00:44:06.593384 sshd-session[6243]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:06.597000 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:44:06.597747 systemd[1]: sshd@24-172.238.161.214:22-139.178.89.65:50556.service: Deactivated successfully. Jul 10 00:44:06.599818 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:44:06.601536 systemd-logind[1548]: Removed session 20. Jul 10 00:44:06.651766 systemd[1]: Started sshd@25-172.238.161.214:22-139.178.89.65:50562.service - OpenSSH per-connection server daemon (139.178.89.65:50562). 
Jul 10 00:44:06.981945 sshd[6254]: Accepted publickey for core from 139.178.89.65 port 50562 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:44:06.983537 sshd-session[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:44:06.988549 systemd-logind[1548]: New session 21 of user core. Jul 10 00:44:06.992745 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:44:07.275918 sshd[6256]: Connection closed by 139.178.89.65 port 50562 Jul 10 00:44:07.276779 sshd-session[6254]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:07.280687 systemd[1]: sshd@25-172.238.161.214:22-139.178.89.65:50562.service: Deactivated successfully. Jul 10 00:44:07.283240 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:44:07.284398 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:44:07.286015 systemd-logind[1548]: Removed session 21. Jul 10 00:44:12.334826 systemd[1]: Started sshd@26-172.238.161.214:22-139.178.89.65:49480.service - OpenSSH per-connection server daemon (139.178.89.65:49480). Jul 10 00:44:12.662707 sshd[6270]: Accepted publickey for core from 139.178.89.65 port 49480 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:44:12.663833 sshd-session[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:44:12.667909 systemd-logind[1548]: New session 22 of user core. Jul 10 00:44:12.673731 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 10 00:44:12.715666 containerd[1580]: time="2025-07-10T00:44:12.715611553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdcd7ef351bb43747d3cae12f3100e0a1275509bb564412c83d3772f5f4a26b\" id:\"add10d5b5abfb659d7c9f6917c6f3fb85e936c245e8ef9f56cba99fe5849a273\" pid:6285 exited_at:{seconds:1752108252 nanos:715347742}" Jul 10 00:44:12.947236 sshd[6291]: Connection closed by 139.178.89.65 port 49480 Jul 10 00:44:12.947873 sshd-session[6270]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:12.951871 systemd[1]: sshd@26-172.238.161.214:22-139.178.89.65:49480.service: Deactivated successfully. Jul 10 00:44:12.953756 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:44:12.954606 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:44:12.956058 systemd-logind[1548]: Removed session 22. Jul 10 00:44:14.415148 kubelet[2714]: E0710 00:44:14.414535 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:44:17.855063 containerd[1580]: time="2025-07-10T00:44:17.855008277Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"693c93623acdf3eeb6a00f5fdc7bc3652def4bcd1b5bba9e9a904d51298513d2\" pid:6319 exited_at:{seconds:1752108257 nanos:854852917}" Jul 10 00:44:18.007934 systemd[1]: Started sshd@27-172.238.161.214:22-139.178.89.65:49486.service - OpenSSH per-connection server daemon (139.178.89.65:49486). Jul 10 00:44:18.330992 sshd[6329]: Accepted publickey for core from 139.178.89.65 port 49486 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:44:18.332198 sshd-session[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:44:18.335778 systemd-logind[1548]: New session 23 of user core. 
Jul 10 00:44:18.341756 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:44:18.616461 sshd[6331]: Connection closed by 139.178.89.65 port 49486 Jul 10 00:44:18.616980 sshd-session[6329]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:18.619652 systemd[1]: sshd@27-172.238.161.214:22-139.178.89.65:49486.service: Deactivated successfully. Jul 10 00:44:18.621746 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:44:18.622834 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:44:18.624701 systemd-logind[1548]: Removed session 23. Jul 10 00:44:23.682847 systemd[1]: Started sshd@28-172.238.161.214:22-139.178.89.65:34650.service - OpenSSH per-connection server daemon (139.178.89.65:34650). Jul 10 00:44:24.021373 sshd[6343]: Accepted publickey for core from 139.178.89.65 port 34650 ssh2: RSA SHA256:gZ/T5e+JxZJH1ewp2UwuRA38busheRHGClhkx1PKEdc Jul 10 00:44:24.022699 sshd-session[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:44:24.028224 systemd-logind[1548]: New session 24 of user core. Jul 10 00:44:24.034764 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:44:24.319062 sshd[6345]: Connection closed by 139.178.89.65 port 34650 Jul 10 00:44:24.319912 sshd-session[6343]: pam_unix(sshd:session): session closed for user core Jul 10 00:44:24.324136 systemd[1]: sshd@28-172.238.161.214:22-139.178.89.65:34650.service: Deactivated successfully. Jul 10 00:44:24.326523 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:44:24.327469 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:44:24.329543 systemd-logind[1548]: Removed session 24. 
Jul 10 00:44:25.414202 kubelet[2714]: E0710 00:44:25.414107 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jul 10 00:44:26.571116 containerd[1580]: time="2025-07-10T00:44:26.571056108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56b0c58117c9e9d933d0512ec72428302a1dd6ae8da249bda35cae06f0f43a05\" id:\"6c411e3e5325e5e4adc99c1919e32c0c1a65c72f45ff20afa7b456ec9f5a4f62\" pid:6370 exited_at:{seconds:1752108266 nanos:570703319}"