Jul 15 05:19:54.835377 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025
Jul 15 05:19:54.835399 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:19:54.835408 kernel: BIOS-provided physical RAM map:
Jul 15 05:19:54.835416 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jul 15 05:19:54.835422 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jul 15 05:19:54.835427 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 05:19:54.835434 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 15 05:19:54.835440 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 15 05:19:54.835445 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 05:19:54.835451 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 05:19:54.835457 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 05:19:54.835462 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 05:19:54.835470 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jul 15 05:19:54.835476 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 05:19:54.835482 kernel: NX (Execute Disable) protection: active
Jul 15 05:19:54.835489 kernel: APIC: Static calls initialized
Jul 15 05:19:54.835495 kernel: SMBIOS 2.8 present.
Jul 15 05:19:54.835564 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jul 15 05:19:54.835570 kernel: DMI: Memory slots populated: 1/1
Jul 15 05:19:54.835576 kernel: Hypervisor detected: KVM
Jul 15 05:19:54.835582 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 05:19:54.835589 kernel: kvm-clock: using sched offset of 5496983904 cycles
Jul 15 05:19:54.835595 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 05:19:54.835601 kernel: tsc: Detected 1999.997 MHz processor
Jul 15 05:19:54.835608 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 05:19:54.835614 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 05:19:54.835621 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jul 15 05:19:54.835629 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 15 05:19:54.835636 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 05:19:54.835642 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 15 05:19:54.835648 kernel: Using GB pages for direct mapping
Jul 15 05:19:54.835654 kernel: ACPI: Early table checksum verification disabled
Jul 15 05:19:54.835660 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jul 15 05:19:54.835667 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:19:54.835673 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:19:54.835679 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:19:54.835687 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 15 05:19:54.835694 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:19:54.835700 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:19:54.835706 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:19:54.835715 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:19:54.835722 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jul 15 05:19:54.835731 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jul 15 05:19:54.835737 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 15 05:19:54.835744 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jul 15 05:19:54.835750 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jul 15 05:19:54.835757 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jul 15 05:19:54.835763 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jul 15 05:19:54.835770 kernel: No NUMA configuration found
Jul 15 05:19:54.835776 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jul 15 05:19:54.835784 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jul 15 05:19:54.835791 kernel: Zone ranges:
Jul 15 05:19:54.835797 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 05:19:54.835804 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 15 05:19:54.835810 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jul 15 05:19:54.835817 kernel: Device empty
Jul 15 05:19:54.835823 kernel: Movable zone start for each node
Jul 15 05:19:54.835830 kernel: Early memory node ranges
Jul 15 05:19:54.835836 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 05:19:54.835843 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 15 05:19:54.835851 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jul 15 05:19:54.835857 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jul 15 05:19:54.835864 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 05:19:54.835870 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 05:19:54.835877 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 15 05:19:54.835883 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 05:19:54.835890 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 05:19:54.835897 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 05:19:54.835903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 05:19:54.835911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 05:19:54.835918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 05:19:54.835924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 05:19:54.835931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 05:19:54.835937 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 05:19:54.835944 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 05:19:54.835950 kernel: TSC deadline timer available
Jul 15 05:19:54.835957 kernel: CPU topo: Max. logical packages: 1
Jul 15 05:19:54.835963 kernel: CPU topo: Max. logical dies: 1
Jul 15 05:19:54.835971 kernel: CPU topo: Max. dies per package: 1
Jul 15 05:19:54.835977 kernel: CPU topo: Max. threads per core: 1
Jul 15 05:19:54.835984 kernel: CPU topo: Num. cores per package: 2
Jul 15 05:19:54.835990 kernel: CPU topo: Num. threads per package: 2
Jul 15 05:19:54.835997 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 15 05:19:54.836003 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 05:19:54.836010 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 05:19:54.836016 kernel: kvm-guest: setup PV sched yield
Jul 15 05:19:54.836023 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 05:19:54.836031 kernel: Booting paravirtualized kernel on KVM
Jul 15 05:19:54.836038 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 05:19:54.836044 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 15 05:19:54.836051 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 15 05:19:54.836057 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 15 05:19:54.836064 kernel: pcpu-alloc: [0] 0 1
Jul 15 05:19:54.836070 kernel: kvm-guest: PV spinlocks enabled
Jul 15 05:19:54.836076 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 05:19:54.836084 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:19:54.836093 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 05:19:54.836099 kernel: random: crng init done
Jul 15 05:19:54.836106 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 05:19:54.836112 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 05:19:54.836119 kernel: Fallback order for Node 0: 0
Jul 15 05:19:54.836125 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 15 05:19:54.836132 kernel: Policy zone: Normal
Jul 15 05:19:54.836138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 05:19:54.836146 kernel: software IO TLB: area num 2.
Jul 15 05:19:54.836153 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 05:19:54.836159 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 15 05:19:54.836166 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 05:19:54.836172 kernel: Dynamic Preempt: voluntary
Jul 15 05:19:54.836179 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 05:19:54.836186 kernel: rcu: RCU event tracing is enabled.
Jul 15 05:19:54.836193 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 05:19:54.836200 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 05:19:54.836206 kernel: Rude variant of Tasks RCU enabled.
Jul 15 05:19:54.836214 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 05:19:54.836221 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 05:19:54.836227 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 05:19:54.836234 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:19:54.836246 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:19:54.836255 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:19:54.836262 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 15 05:19:54.836269 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 05:19:54.836276 kernel: Console: colour VGA+ 80x25
Jul 15 05:19:54.836282 kernel: printk: legacy console [tty0] enabled
Jul 15 05:19:54.836289 kernel: printk: legacy console [ttyS0] enabled
Jul 15 05:19:54.836298 kernel: ACPI: Core revision 20240827
Jul 15 05:19:54.836305 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 05:19:54.836312 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 05:19:54.836318 kernel: x2apic enabled
Jul 15 05:19:54.836325 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 05:19:54.836334 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 15 05:19:54.836340 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 15 05:19:54.836347 kernel: kvm-guest: setup PV IPIs
Jul 15 05:19:54.836354 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 05:19:54.836361 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Jul 15 05:19:54.836367 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Jul 15 05:19:54.836374 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 05:19:54.836381 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 05:19:54.836387 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 05:19:54.836396 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 05:19:54.836402 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 05:19:54.836409 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 05:19:54.836416 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 15 05:19:54.836422 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 05:19:54.836429 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 05:19:54.836436 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 15 05:19:54.836443 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 15 05:19:54.836452 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 15 05:19:54.836458 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 05:19:54.836465 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 05:19:54.836472 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 05:19:54.836478 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 15 05:19:54.836485 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 05:19:54.836492 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jul 15 05:19:54.836527 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jul 15 05:19:54.836534 kernel: Freeing SMP alternatives memory: 32K
Jul 15 05:19:54.836543 kernel: pid_max: default: 32768 minimum: 301
Jul 15 05:19:54.836550 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 05:19:54.836557 kernel: landlock: Up and running.
Jul 15 05:19:54.836563 kernel: SELinux: Initializing.
Jul 15 05:19:54.836570 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 05:19:54.836577 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 05:19:54.836583 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jul 15 05:19:54.836590 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 05:19:54.836597 kernel: ... version: 0
Jul 15 05:19:54.836605 kernel: ... bit width: 48
Jul 15 05:19:54.836611 kernel: ... generic registers: 6
Jul 15 05:19:54.836618 kernel: ... value mask: 0000ffffffffffff
Jul 15 05:19:54.836625 kernel: ... max period: 00007fffffffffff
Jul 15 05:19:54.836631 kernel: ... fixed-purpose events: 0
Jul 15 05:19:54.836638 kernel: ... event mask: 000000000000003f
Jul 15 05:19:54.836644 kernel: signal: max sigframe size: 3376
Jul 15 05:19:54.836651 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 05:19:54.836658 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 05:19:54.836666 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 05:19:54.836673 kernel: smp: Bringing up secondary CPUs ...
Jul 15 05:19:54.836679 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 05:19:54.836686 kernel: .... node #0, CPUs: #1
Jul 15 05:19:54.836693 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 05:19:54.836699 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Jul 15 05:19:54.836706 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 227288K reserved, 0K cma-reserved)
Jul 15 05:19:54.836713 kernel: devtmpfs: initialized
Jul 15 05:19:54.836720 kernel: x86/mm: Memory block size: 128MB
Jul 15 05:19:54.836728 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 05:19:54.836735 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 05:19:54.836741 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 05:19:54.836748 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 05:19:54.836755 kernel: audit: initializing netlink subsys (disabled)
Jul 15 05:19:54.836761 kernel: audit: type=2000 audit(1752556791.887:1): state=initialized audit_enabled=0 res=1
Jul 15 05:19:54.836768 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 05:19:54.836775 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 05:19:54.836781 kernel: cpuidle: using governor menu
Jul 15 05:19:54.836789 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 05:19:54.836796 kernel: dca service started, version 1.12.1
Jul 15 05:19:54.836803 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 15 05:19:54.836809 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 15 05:19:54.836816 kernel: PCI: Using configuration type 1 for base access
Jul 15 05:19:54.836823 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 05:19:54.836829 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 05:19:54.836836 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 05:19:54.836843 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 05:19:54.836851 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 05:19:54.836858 kernel: ACPI: Added _OSI(Module Device)
Jul 15 05:19:54.836864 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 05:19:54.836871 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 05:19:54.836877 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 05:19:54.836884 kernel: ACPI: Interpreter enabled
Jul 15 05:19:54.836890 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 05:19:54.836897 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 05:19:54.836904 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 05:19:54.836912 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 05:19:54.836919 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 05:19:54.836925 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 05:19:54.837143 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 05:19:54.837261 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 05:19:54.837370 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 05:19:54.837379 kernel: PCI host bridge to bus 0000:00
Jul 15 05:19:54.837532 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 05:19:54.837650 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 05:19:54.837752 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 05:19:54.837852 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 15 05:19:54.837948 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 05:19:54.838043 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jul 15 05:19:54.838480 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 05:19:54.838718 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 15 05:19:54.838849 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 15 05:19:54.838958 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 15 05:19:54.839064 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 15 05:19:54.839167 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 15 05:19:54.839271 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 05:19:54.839388 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jul 15 05:19:54.840249 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jul 15 05:19:54.840380 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 15 05:19:54.840490 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 05:19:54.840633 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 15 05:19:54.840740 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jul 15 05:19:54.840846 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 15 05:19:54.840957 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 05:19:54.841064 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 15 05:19:54.841181 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 15 05:19:54.841287 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 05:19:54.841408 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 15 05:19:54.842913 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jul 15 05:19:54.843031 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jul 15 05:19:54.843157 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 15 05:19:54.843265 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 15 05:19:54.843274 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 05:19:54.843282 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 05:19:54.843289 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 05:19:54.843295 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 05:19:54.843302 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 05:19:54.843309 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 05:19:54.843318 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 05:19:54.843325 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 05:19:54.843332 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 05:19:54.843338 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 05:19:54.843345 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 05:19:54.843351 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 05:19:54.843358 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 05:19:54.843365 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 05:19:54.843371 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 05:19:54.843380 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 05:19:54.843386 kernel: iommu: Default domain type: Translated
Jul 15 05:19:54.843393 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 05:19:54.843400 kernel: PCI: Using ACPI for IRQ routing
Jul 15 05:19:54.843406 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 05:19:54.843413 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jul 15 05:19:54.843420 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 15 05:19:54.843585 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 05:19:54.843699 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 05:19:54.843819 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 05:19:54.843829 kernel: vgaarb: loaded
Jul 15 05:19:54.843836 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 05:19:54.843843 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 05:19:54.843850 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 05:19:54.843856 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 05:19:54.843863 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 05:19:54.843870 kernel: pnp: PnP ACPI init
Jul 15 05:19:54.843993 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 05:19:54.844003 kernel: pnp: PnP ACPI: found 5 devices
Jul 15 05:19:54.844010 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 05:19:54.844017 kernel: NET: Registered PF_INET protocol family
Jul 15 05:19:54.844024 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 05:19:54.844031 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 05:19:54.844037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 05:19:54.844044 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 05:19:54.844054 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 05:19:54.844060 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 05:19:54.844067 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 05:19:54.844074 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 05:19:54.844081 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 05:19:54.844087 kernel: NET: Registered PF_XDP protocol family
Jul 15 05:19:54.844185 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 05:19:54.844288 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 05:19:54.844386 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 05:19:54.844486 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 15 05:19:54.845633 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 05:19:54.845739 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jul 15 05:19:54.845749 kernel: PCI: CLS 0 bytes, default 64
Jul 15 05:19:54.845756 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 15 05:19:54.845764 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jul 15 05:19:54.845770 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Jul 15 05:19:54.845777 kernel: Initialise system trusted keyrings
Jul 15 05:19:54.845788 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 05:19:54.845795 kernel: Key type asymmetric registered
Jul 15 05:19:54.845801 kernel: Asymmetric key parser 'x509' registered
Jul 15 05:19:54.845808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 05:19:54.845815 kernel: io scheduler mq-deadline registered
Jul 15 05:19:54.845822 kernel: io scheduler kyber registered
Jul 15 05:19:54.845828 kernel: io scheduler bfq registered
Jul 15 05:19:54.845835 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 05:19:54.845842 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 05:19:54.845851 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 05:19:54.845858 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 05:19:54.845865 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 05:19:54.845872 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 05:19:54.845878 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 05:19:54.845885 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 05:19:54.846008 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 15 05:19:54.846018 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 05:19:54.846117 kernel: rtc_cmos 00:03: registered as rtc0
Jul 15 05:19:54.846219 kernel: rtc_cmos 00:03: setting system clock to 2025-07-15T05:19:54 UTC (1752556794)
Jul 15 05:19:54.846318 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 05:19:54.846327 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 15 05:19:54.846334 kernel: NET: Registered PF_INET6 protocol family
Jul 15 05:19:54.846340 kernel: Segment Routing with IPv6
Jul 15 05:19:54.846347 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 05:19:54.846354 kernel: NET: Registered PF_PACKET protocol family
Jul 15 05:19:54.846360 kernel: Key type dns_resolver registered
Jul 15 05:19:54.846370 kernel: IPI shorthand broadcast: enabled
Jul 15 05:19:54.846377 kernel: sched_clock: Marking stable (2498005223, 208249589)->(2731413207, -25158395)
Jul 15 05:19:54.846383 kernel: registered taskstats version 1
Jul 15 05:19:54.846390 kernel: Loading compiled-in X.509 certificates
Jul 15 05:19:54.846397 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7'
Jul 15 05:19:54.846404 kernel: Demotion targets for Node 0: null
Jul 15 05:19:54.846410 kernel: Key type .fscrypt registered
Jul 15 05:19:54.846417 kernel: Key type fscrypt-provisioning registered
Jul 15 05:19:54.846424 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 05:19:54.846433 kernel: ima: Allocated hash algorithm: sha1
Jul 15 05:19:54.846439 kernel: ima: No architecture policies found
Jul 15 05:19:54.846446 kernel: clk: Disabling unused clocks
Jul 15 05:19:54.846453 kernel: Warning: unable to open an initial console.
Jul 15 05:19:54.846460 kernel: Freeing unused kernel image (initmem) memory: 54608K
Jul 15 05:19:54.846466 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 05:19:54.846473 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 05:19:54.846480 kernel: Run /init as init process
Jul 15 05:19:54.846487 kernel: with arguments:
Jul 15 05:19:54.846511 kernel: /init
Jul 15 05:19:54.846518 kernel: with environment:
Jul 15 05:19:54.846525 kernel: HOME=/
Jul 15 05:19:54.846532 kernel: TERM=linux
Jul 15 05:19:54.846538 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 05:19:54.846560 systemd[1]: Successfully made /usr/ read-only.
Jul 15 05:19:54.846572 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 05:19:54.846582 systemd[1]: Detected virtualization kvm.
Jul 15 05:19:54.846589 systemd[1]: Detected architecture x86-64.
Jul 15 05:19:54.846596 systemd[1]: Running in initrd.
Jul 15 05:19:54.846603 systemd[1]: No hostname configured, using default hostname.
Jul 15 05:19:54.846611 systemd[1]: Hostname set to .
Jul 15 05:19:54.846618 systemd[1]: Initializing machine ID from random generator.
Jul 15 05:19:54.846625 systemd[1]: Queued start job for default target initrd.target.
Jul 15 05:19:54.846632 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:19:54.846640 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:19:54.846650 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 05:19:54.846657 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 05:19:54.846665 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 05:19:54.846673 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 05:19:54.846681 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 05:19:54.846689 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 05:19:54.846699 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:19:54.846707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:19:54.846714 systemd[1]: Reached target paths.target - Path Units.
Jul 15 05:19:54.846721 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 05:19:54.846728 systemd[1]: Reached target swap.target - Swaps.
Jul 15 05:19:54.846736 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 05:19:54.846743 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 05:19:54.846750 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 05:19:54.846757 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 05:19:54.846766 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 05:19:54.846774 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:19:54.846781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:19:54.846788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:19:54.846796 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 05:19:54.846803 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 05:19:54.846812 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 05:19:54.846819 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 05:19:54.846827 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 05:19:54.846834 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 05:19:54.846841 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 05:19:54.846849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 05:19:54.846856 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:19:54.846863 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 05:19:54.846893 systemd-journald[206]: Collecting audit messages is disabled.
Jul 15 05:19:54.846914 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:19:54.846921 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 05:19:54.846929 systemd-journald[206]: Journal started
Jul 15 05:19:54.846946 systemd-journald[206]: Runtime Journal (/run/log/journal/3c90730017c24eb798fd9df4ebeb2cec) is 8M, max 78.5M, 70.5M free.
Jul 15 05:19:54.822658 systemd-modules-load[207]: Inserted module 'overlay'
Jul 15 05:19:54.857614 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 05:19:54.857631 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 05:19:54.859529 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 05:19:54.859552 kernel: Bridge firewalling registered
Jul 15 05:19:54.862984 systemd-modules-load[207]: Inserted module 'br_netfilter'
Jul 15 05:19:54.901780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:19:54.919277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:19:54.920335 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:19:54.924786 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 05:19:54.926725 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 05:19:54.928762 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 05:19:54.934629 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 05:19:54.942629 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:19:54.949224 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:19:54.950674 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 05:19:54.953620 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 05:19:54.954486 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 05:19:54.966668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:19:54.969601 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 05:19:54.978185 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:19:55.011323 systemd-resolved[244]: Positive Trust Anchors:
Jul 15 05:19:55.011911 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 05:19:55.011933 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 05:19:55.014632 systemd-resolved[244]: Defaulting to hostname 'linux'.
Jul 15 05:19:55.017289 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 05:19:55.017981 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:19:55.067541 kernel: SCSI subsystem initialized
Jul 15 05:19:55.074524 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 05:19:55.084529 kernel: iscsi: registered transport (tcp)
Jul 15 05:19:55.103769 kernel: iscsi: registered transport (qla4xxx)
Jul 15 05:19:55.103796 kernel: QLogic iSCSI HBA Driver
Jul 15 05:19:55.123636 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 05:19:55.139560 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:19:55.141594 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 05:19:55.197414 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 05:19:55.199757 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 05:19:55.250526 kernel: raid6: avx2x4 gen() 27922 MB/s
Jul 15 05:19:55.267525 kernel: raid6: avx2x2 gen() 26541 MB/s
Jul 15 05:19:55.285668 kernel: raid6: avx2x1 gen() 16223 MB/s
Jul 15 05:19:55.285684 kernel: raid6: using algorithm avx2x4 gen() 27922 MB/s
Jul 15 05:19:55.304551 kernel: raid6: .... xor() 3086 MB/s, rmw enabled
Jul 15 05:19:55.304581 kernel: raid6: using avx2x2 recovery algorithm
Jul 15 05:19:55.320535 kernel: xor: automatically using best checksumming function avx
Jul 15 05:19:55.448537 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 05:19:55.456024 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 05:19:55.458836 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:19:55.476550 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Jul 15 05:19:55.480750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:19:55.483512 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 05:19:55.508694 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Jul 15 05:19:55.537867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 05:19:55.540213 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 05:19:55.618762 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:19:55.622592 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 05:19:55.747528 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 05:19:55.751523 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 15 05:19:55.753523 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jul 15 05:19:55.765780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 05:19:55.766631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:19:55.768655 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:19:55.773514 kernel: scsi host0: Virtio SCSI HBA
Jul 15 05:19:55.772297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:19:55.779523 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 15 05:19:55.785028 kernel: AES CTR mode by8 optimization enabled
Jul 15 05:19:55.843608 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jul 15 05:19:55.843896 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jul 15 05:19:55.844044 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 15 05:19:55.844177 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jul 15 05:19:55.844307 kernel: libata version 3.00 loaded.
Jul 15 05:19:55.844317 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 15 05:19:55.852520 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 05:19:55.852543 kernel: GPT:9289727 != 167739391
Jul 15 05:19:55.852553 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 05:19:55.852562 kernel: GPT:9289727 != 167739391
Jul 15 05:19:55.852576 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 05:19:55.852585 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:19:55.852596 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 15 05:19:55.861539 kernel: ahci 0000:00:1f.2: version 3.0
Jul 15 05:19:55.861710 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 15 05:19:55.863290 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 15 05:19:55.863443 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 15 05:19:55.863589 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 15 05:19:55.865512 kernel: scsi host1: ahci
Jul 15 05:19:55.865733 kernel: scsi host2: ahci
Jul 15 05:19:55.866549 kernel: scsi host3: ahci
Jul 15 05:19:55.867512 kernel: scsi host4: ahci
Jul 15 05:19:55.867657 kernel: scsi host5: ahci
Jul 15 05:19:55.868581 kernel: scsi host6: ahci
Jul 15 05:19:55.868730 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Jul 15 05:19:55.868745 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Jul 15 05:19:55.868755 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Jul 15 05:19:55.868763 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Jul 15 05:19:55.868772 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Jul 15 05:19:55.868780 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Jul 15 05:19:55.925818 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jul 15 05:19:55.942378 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 15 05:19:55.965036 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:19:55.982697 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jul 15 05:19:55.983297 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jul 15 05:19:55.991991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 15 05:19:55.994141 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 05:19:56.014490 disk-uuid[623]: Primary Header is updated.
Jul 15 05:19:56.014490 disk-uuid[623]: Secondary Entries is updated.
Jul 15 05:19:56.014490 disk-uuid[623]: Secondary Header is updated.
Jul 15 05:19:56.025836 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:19:56.039518 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:19:56.190248 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 15 05:19:56.190316 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 15 05:19:56.190327 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 15 05:19:56.190336 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 15 05:19:56.190351 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 15 05:19:56.190360 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 15 05:19:56.208015 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 05:19:56.215607 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 05:19:56.216191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:19:56.217407 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 05:19:56.219370 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 05:19:56.237863 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 05:19:57.038599 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:19:57.038642 disk-uuid[624]: The operation has completed successfully.
Jul 15 05:19:57.085982 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 05:19:57.086095 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 05:19:57.113905 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 05:19:57.125452 sh[652]: Success
Jul 15 05:19:57.142535 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 05:19:57.142566 kernel: device-mapper: uevent: version 1.0.3
Jul 15 05:19:57.144005 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 05:19:57.154653 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 15 05:19:57.194311 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 05:19:57.197559 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 05:19:57.208963 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 05:19:57.222526 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 05:19:57.222554 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (664)
Jul 15 05:19:57.223604 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b
Jul 15 05:19:57.225784 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:19:57.227525 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 05:19:57.236133 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 05:19:57.236959 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 05:19:57.237841 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 05:19:57.238428 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 05:19:57.241926 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 05:19:57.266521 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (697)
Jul 15 05:19:57.269863 kernel: BTRFS info (device sda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:19:57.269889 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:19:57.271615 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 05:19:57.280549 kernel: BTRFS info (device sda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:19:57.281403 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 05:19:57.283383 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 05:19:57.369174 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 05:19:57.372979 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 05:19:57.380872 ignition[758]: Ignition 2.21.0
Jul 15 05:19:57.380885 ignition[758]: Stage: fetch-offline
Jul 15 05:19:57.380915 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:19:57.380924 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:19:57.380998 ignition[758]: parsed url from cmdline: ""
Jul 15 05:19:57.381001 ignition[758]: no config URL provided
Jul 15 05:19:57.383915 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 05:19:57.381006 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 05:19:57.381013 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jul 15 05:19:57.381018 ignition[758]: failed to fetch config: resource requires networking
Jul 15 05:19:57.381244 ignition[758]: Ignition finished successfully
Jul 15 05:19:57.409299 systemd-networkd[838]: lo: Link UP
Jul 15 05:19:57.409310 systemd-networkd[838]: lo: Gained carrier
Jul 15 05:19:57.410714 systemd-networkd[838]: Enumeration completed
Jul 15 05:19:57.411060 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:19:57.411064 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 05:19:57.411886 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 05:19:57.412865 systemd-networkd[838]: eth0: Link UP
Jul 15 05:19:57.412868 systemd-networkd[838]: eth0: Gained carrier
Jul 15 05:19:57.412876 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:19:57.413546 systemd[1]: Reached target network.target - Network.
Jul 15 05:19:57.414761 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 05:19:57.436197 ignition[843]: Ignition 2.21.0
Jul 15 05:19:57.436228 ignition[843]: Stage: fetch
Jul 15 05:19:57.436361 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:19:57.436390 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:19:57.436473 ignition[843]: parsed url from cmdline: ""
Jul 15 05:19:57.436477 ignition[843]: no config URL provided
Jul 15 05:19:57.436483 ignition[843]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 05:19:57.436491 ignition[843]: no config at "/usr/lib/ignition/user.ign"
Jul 15 05:19:57.436541 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #1
Jul 15 05:19:57.436707 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 15 05:19:57.637206 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #2
Jul 15 05:19:57.637397 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 15 05:19:57.884615 systemd-networkd[838]: eth0: DHCPv4 address 172.237.133.19/24, gateway 172.237.133.1 acquired from 23.205.167.218
Jul 15 05:19:58.037818 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #3
Jul 15 05:19:58.137681 ignition[843]: PUT result: OK
Jul 15 05:19:58.137741 ignition[843]: GET http://169.254.169.254/v1/user-data: attempt #1
Jul 15 05:19:58.248052 ignition[843]: GET result: OK
Jul 15 05:19:58.248224 ignition[843]: parsing config with SHA512: 0ce7a7549109694a7d046af01a6c1b2fa29b44a572a28f17ae159960ab63e5be8ce761c3c8470e36eb43648670b6527a8bfa7886c2dc50469e9b607ec8580fcf
Jul 15 05:19:58.254232 unknown[843]: fetched base config from "system"
Jul 15 05:19:58.254246 unknown[843]: fetched base config from "system"
Jul 15 05:19:58.254808 ignition[843]: fetch: fetch complete
Jul 15 05:19:58.254253 unknown[843]: fetched user config from "akamai"
Jul 15 05:19:58.254814 ignition[843]: fetch: fetch passed
Jul 15 05:19:58.254855 ignition[843]: Ignition finished successfully
Jul 15 05:19:58.258133 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 05:19:58.259645 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 05:19:58.292766 ignition[850]: Ignition 2.21.0
Jul 15 05:19:58.292778 ignition[850]: Stage: kargs
Jul 15 05:19:58.292884 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:19:58.298530 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 05:19:58.292894 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:19:58.295118 ignition[850]: kargs: kargs passed
Jul 15 05:19:58.295161 ignition[850]: Ignition finished successfully
Jul 15 05:19:58.318610 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 05:19:58.345916 ignition[857]: Ignition 2.21.0
Jul 15 05:19:58.345927 ignition[857]: Stage: disks
Jul 15 05:19:58.346042 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:19:58.346052 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:19:58.347833 ignition[857]: disks: disks passed
Jul 15 05:19:58.347874 ignition[857]: Ignition finished successfully
Jul 15 05:19:58.350273 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 05:19:58.351844 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 05:19:58.352613 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 05:19:58.353662 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 05:19:58.354871 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 05:19:58.356064 systemd[1]: Reached target basic.target - Basic System.
Jul 15 05:19:58.357916 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 05:19:58.399939 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 05:19:58.403091 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 05:19:58.406773 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 05:19:58.515539 kernel: EXT4-fs (sda9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none.
Jul 15 05:19:58.515965 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 05:19:58.516985 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 05:19:58.519213 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 05:19:58.520749 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 05:19:58.522718 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 05:19:58.523470 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 05:19:58.523510 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 05:19:58.532188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 05:19:58.534629 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 05:19:58.544460 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (874)
Jul 15 05:19:58.546321 kernel: BTRFS info (device sda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:19:58.546340 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:19:58.548015 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 05:19:58.553917 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 05:19:58.586726 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 05:19:58.592407 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory
Jul 15 05:19:58.597776 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 05:19:58.602078 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 05:19:58.699740 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 05:19:58.701471 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 05:19:58.704677 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 05:19:58.723230 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 05:19:58.725744 kernel: BTRFS info (device sda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:19:58.739462 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 05:19:58.750812 ignition[987]: INFO : Ignition 2.21.0
Jul 15 05:19:58.750812 ignition[987]: INFO : Stage: mount
Jul 15 05:19:58.751905 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:19:58.751905 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:19:58.758280 ignition[987]: INFO : mount: mount passed
Jul 15 05:19:58.759180 ignition[987]: INFO : Ignition finished successfully
Jul 15 05:19:58.762036 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 05:19:58.764309 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 05:19:59.260827 systemd-networkd[838]: eth0: Gained IPv6LL
Jul 15 05:19:59.518447 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 05:19:59.536591 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (999)
Jul 15 05:19:59.540387 kernel: BTRFS info (device sda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:19:59.540432 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:19:59.540444 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 05:19:59.546040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 05:19:59.579258 ignition[1015]: INFO : Ignition 2.21.0
Jul 15 05:19:59.581542 ignition[1015]: INFO : Stage: files
Jul 15 05:19:59.581542 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:19:59.581542 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:19:59.583726 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 05:19:59.583726 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 05:19:59.583726 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 05:19:59.586354 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 05:19:59.587168 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 05:19:59.587168 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 05:19:59.586823 unknown[1015]: wrote ssh authorized keys file for user: core
Jul 15 05:19:59.589350 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 15 05:19:59.589350 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 15 05:20:00.024650 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 05:20:01.247316 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 15 05:20:01.247316 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 05:20:01.250116 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 05:20:01.250116 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 05:20:01.250116 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 05:20:01.250116 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 05:20:01.250116 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 05:20:01.250116 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 05:20:01.250116 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 05:20:01.257263 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 05:20:01.257263 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 05:20:01.257263 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 05:20:01.257263 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 05:20:01.257263 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 05:20:01.257263 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 15 05:20:01.650661 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 05:20:01.886778 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 05:20:01.886778 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 05:20:01.889433 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 05:20:01.890479 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 05:20:01.890479 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 05:20:01.890479 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 15 05:20:01.890479 ignition[1015]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 15 05:20:01.890479 ignition[1015]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 15 05:20:01.890479 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 15 05:20:01.890479 ignition[1015]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 05:20:01.890479 ignition[1015]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 05:20:01.899349 ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 05:20:01.899349 ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 05:20:01.899349 ignition[1015]: INFO : files: files passed
Jul 15 05:20:01.899349 ignition[1015]: INFO : Ignition finished successfully
Jul 15 05:20:01.893972 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 05:20:01.899616 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 05:20:01.903003 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 05:20:01.911555 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 05:20:01.911678 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 05:20:01.920204 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 05:20:01.920204 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 05:20:01.922634 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 05:20:01.923051 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 05:20:01.924597 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 05:20:01.926378 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 05:20:01.970322 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 05:20:01.970466 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 05:20:01.972178 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 05:20:01.973003 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 05:20:01.974255 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 05:20:01.975095 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 05:20:01.997275 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 05:20:01.999731 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 05:20:02.016201 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:20:02.017263 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:20:02.018577 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 05:20:02.019773 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 05:20:02.019905 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 05:20:02.021167 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 05:20:02.021946 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 05:20:02.023188 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 05:20:02.024246 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 05:20:02.025366 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 05:20:02.026573 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 05:20:02.027894 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 05:20:02.029085 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 05:20:02.030421 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 05:20:02.031605 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 05:20:02.032906 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 05:20:02.034034 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 05:20:02.034127 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 05:20:02.035510 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:20:02.036308 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:20:02.037379 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 05:20:02.039621 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:20:02.040611 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 05:20:02.040740 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 05:20:02.042250 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 05:20:02.042361 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 05:20:02.043157 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 05:20:02.043284 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 05:20:02.046588 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 05:20:02.047231 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 05:20:02.047387 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:20:02.055638 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 05:20:02.056162 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 05:20:02.056265 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:20:02.059041 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 05:20:02.059167 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 05:20:02.065356 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 05:20:02.066377 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 05:20:02.077259 ignition[1070]: INFO : Ignition 2.21.0
Jul 15 05:20:02.079157 ignition[1070]: INFO : Stage: umount
Jul 15 05:20:02.079157 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:20:02.079157 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:20:02.079157 ignition[1070]: INFO : umount: umount passed
Jul 15 05:20:02.079157 ignition[1070]: INFO : Ignition finished successfully
Jul 15 05:20:02.084000 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 05:20:02.084716 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 05:20:02.088727 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 05:20:02.089422 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 05:20:02.089511 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 05:20:02.108288 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 05:20:02.108340 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 05:20:02.109308 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 05:20:02.109356 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 05:20:02.110350 systemd[1]: Stopped target network.target - Network.
Jul 15 05:20:02.111357 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 05:20:02.111408 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 05:20:02.112809 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 05:20:02.113862 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 05:20:02.113921 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:20:02.114976 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 05:20:02.116032 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 05:20:02.117088 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 05:20:02.117132 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 05:20:02.118277 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 05:20:02.118316 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 05:20:02.119450 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 05:20:02.119517 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 05:20:02.120525 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 05:20:02.120571 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 05:20:02.121917 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 05:20:02.123105 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 05:20:02.125040 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 05:20:02.125144 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 05:20:02.127114 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 05:20:02.127182 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 05:20:02.132737 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 05:20:02.132857 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 05:20:02.137027 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 05:20:02.137254 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 05:20:02.137379 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 05:20:02.139361 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 05:20:02.139928 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 05:20:02.141060 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 05:20:02.141100 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:20:02.143050 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 05:20:02.144866 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 05:20:02.144938 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 05:20:02.147490 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 05:20:02.147559 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:20:02.149451 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 05:20:02.149521 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:20:02.150641 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 05:20:02.150693 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:20:02.153569 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:20:02.156108 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 05:20:02.156173 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:20:02.170044 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 05:20:02.170217 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:20:02.171311 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 05:20:02.171414 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 05:20:02.172783 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 05:20:02.172853 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:20:02.174283 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 05:20:02.174322 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:20:02.175318 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 05:20:02.175366 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 05:20:02.176975 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 05:20:02.177023 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 05:20:02.178194 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 05:20:02.178244 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 05:20:02.180607 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 05:20:02.181835 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 05:20:02.181889 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:20:02.185033 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 05:20:02.185083 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:20:02.187403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 05:20:02.187451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:20:02.189809 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 05:20:02.189865 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 05:20:02.189908 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:20:02.196593 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 05:20:02.196710 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 05:20:02.198352 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 05:20:02.200615 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 05:20:02.232959 systemd[1]: Switching root.
Jul 15 05:20:02.265559 systemd-journald[206]: Journal stopped
Jul 15 05:20:03.351193 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
Jul 15 05:20:03.351219 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 05:20:03.351233 kernel: SELinux: policy capability open_perms=1
Jul 15 05:20:03.351245 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 05:20:03.351253 kernel: SELinux: policy capability always_check_network=0
Jul 15 05:20:03.351262 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 05:20:03.351271 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 05:20:03.351280 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 05:20:03.351288 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 05:20:03.351297 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 05:20:03.351308 kernel: audit: type=1403 audit(1752556802.423:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 05:20:03.351318 systemd[1]: Successfully loaded SELinux policy in 67.934ms.
Jul 15 05:20:03.351328 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.040ms.
Jul 15 05:20:03.351339 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 05:20:03.351349 systemd[1]: Detected virtualization kvm.
Jul 15 05:20:03.351363 systemd[1]: Detected architecture x86-64.
Jul 15 05:20:03.351372 systemd[1]: Detected first boot.
Jul 15 05:20:03.351382 systemd[1]: Initializing machine ID from random generator.
Jul 15 05:20:03.351392 zram_generator::config[1114]: No configuration found.
Jul 15 05:20:03.351402 kernel: Guest personality initialized and is inactive
Jul 15 05:20:03.351411 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 15 05:20:03.351420 kernel: Initialized host personality
Jul 15 05:20:03.351431 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 05:20:03.351441 systemd[1]: Populated /etc with preset unit settings.
Jul 15 05:20:03.351451 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 05:20:03.351461 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 05:20:03.351471 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 05:20:03.351481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 05:20:03.351491 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 05:20:03.352449 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 05:20:03.352469 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 05:20:03.352482 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 05:20:03.352493 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 05:20:03.352546 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 05:20:03.352557 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 05:20:03.352567 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 05:20:03.352580 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:20:03.352590 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:20:03.352600 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 05:20:03.352610 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 05:20:03.352623 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 05:20:03.352634 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 05:20:03.352644 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 05:20:03.352654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:20:03.352666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:20:03.352676 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 05:20:03.352686 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 05:20:03.352697 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 05:20:03.352707 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 05:20:03.352717 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:20:03.352727 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 05:20:03.352737 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 05:20:03.352748 systemd[1]: Reached target swap.target - Swaps.
Jul 15 05:20:03.352759 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 05:20:03.352769 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 05:20:03.352779 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 05:20:03.352789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:20:03.352801 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:20:03.352811 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:20:03.352821 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 05:20:03.352832 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 05:20:03.352841 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 05:20:03.352852 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 05:20:03.352862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:20:03.352872 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 05:20:03.352884 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 05:20:03.352894 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 05:20:03.352904 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 05:20:03.352914 systemd[1]: Reached target machines.target - Containers.
Jul 15 05:20:03.352924 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 05:20:03.352934 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:20:03.352944 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 05:20:03.352954 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 05:20:03.352966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 05:20:03.352976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 05:20:03.352986 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 05:20:03.352996 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 05:20:03.353007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 05:20:03.353018 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 05:20:03.353028 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 05:20:03.353038 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 05:20:03.353047 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 05:20:03.353059 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 05:20:03.353070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:20:03.353080 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 05:20:03.353090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 05:20:03.353100 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 05:20:03.353110 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 05:20:03.353120 kernel: ACPI: bus type drm_connector registered
Jul 15 05:20:03.353131 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 05:20:03.353142 kernel: loop: module loaded
Jul 15 05:20:03.353152 kernel: fuse: init (API version 7.41)
Jul 15 05:20:03.353162 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 05:20:03.353172 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 05:20:03.353181 systemd[1]: Stopped verity-setup.service.
Jul 15 05:20:03.353192 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:20:03.353223 systemd-journald[1199]: Collecting audit messages is disabled.
Jul 15 05:20:03.353245 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 05:20:03.353258 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 05:20:03.353268 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 05:20:03.353278 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 05:20:03.353288 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 05:20:03.353298 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 05:20:03.353310 systemd-journald[1199]: Journal started
Jul 15 05:20:03.353330 systemd-journald[1199]: Runtime Journal (/run/log/journal/dc4eaa3a2863425ea9c59d6766f13583) is 8M, max 78.5M, 70.5M free.
Jul 15 05:20:02.996054 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 05:20:03.012457 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 15 05:20:03.012918 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 05:20:03.359266 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 05:20:03.357790 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 05:20:03.358742 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:20:03.359705 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 05:20:03.359990 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 05:20:03.360894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 05:20:03.361177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 05:20:03.362343 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 05:20:03.362633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 05:20:03.363488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 05:20:03.363894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 05:20:03.364806 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 05:20:03.365087 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 05:20:03.365962 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 05:20:03.366181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 05:20:03.367297 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:20:03.368215 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:20:03.369269 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 05:20:03.370184 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 05:20:03.384236 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 05:20:03.386606 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 05:20:03.389573 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 05:20:03.390140 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 05:20:03.390167 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 05:20:03.391613 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 05:20:03.396258 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 05:20:03.396926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:20:03.402643 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 05:20:03.406658 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 05:20:03.407849 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 05:20:03.410644 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 05:20:03.413153 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 05:20:03.415648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 05:20:03.418844 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 05:20:03.422596 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 05:20:03.428822 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 05:20:03.429810 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 05:20:03.440594 systemd-journald[1199]: Time spent on flushing to /var/log/journal/dc4eaa3a2863425ea9c59d6766f13583 is 29.064ms for 995 entries.
Jul 15 05:20:03.440594 systemd-journald[1199]: System Journal (/var/log/journal/dc4eaa3a2863425ea9c59d6766f13583) is 8M, max 195.6M, 187.6M free.
Jul 15 05:20:03.486403 systemd-journald[1199]: Received client request to flush runtime journal.
Jul 15 05:20:03.486455 kernel: loop0: detected capacity change from 0 to 221472
Jul 15 05:20:03.456409 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 05:20:03.459110 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 05:20:03.462410 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 05:20:03.496625 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 05:20:03.509950 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 05:20:03.514557 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:20:03.516316 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:20:03.522205 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 05:20:03.538528 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 05:20:03.539700 kernel: loop1: detected capacity change from 0 to 8
Jul 15 05:20:03.542681 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 05:20:03.561583 kernel: loop2: detected capacity change from 0 to 114000
Jul 15 05:20:03.583340 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 15 05:20:03.583356 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 15 05:20:03.590962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:20:03.602798 kernel: loop3: detected capacity change from 0 to 146488
Jul 15 05:20:03.646787 kernel: loop4: detected capacity change from 0 to 221472
Jul 15 05:20:03.669561 kernel: loop5: detected capacity change from 0 to 8
Jul 15 05:20:03.672524 kernel: loop6: detected capacity change from 0 to 114000
Jul 15 05:20:03.690634 kernel: loop7: detected capacity change from 0 to 146488
Jul 15 05:20:03.711777 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jul 15 05:20:03.712459 (sd-merge)[1263]: Merged extensions into '/usr'.
Jul 15 05:20:03.718184 systemd[1]: Reload requested from client PID 1239 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 05:20:03.718339 systemd[1]: Reloading...
Jul 15 05:20:03.824526 zram_generator::config[1299]: No configuration found.
Jul 15 05:20:03.909063 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 05:20:03.926039 ldconfig[1234]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 05:20:03.983335 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 05:20:03.983795 systemd[1]: Reloading finished in 262 ms.
Jul 15 05:20:04.002420 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 05:20:04.003596 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 05:20:04.016617 systemd[1]: Starting ensure-sysext.service...
Jul 15 05:20:04.018621 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 05:20:04.032920 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 05:20:04.037581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:20:04.044660 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)...
Jul 15 05:20:04.044673 systemd[1]: Reloading...
Jul 15 05:20:04.048861 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 05:20:04.048909 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 05:20:04.049202 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 05:20:04.049561 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 05:20:04.050412 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 05:20:04.050740 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Jul 15 05:20:04.050874 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Jul 15 05:20:04.058319 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 05:20:04.058768 systemd-tmpfiles[1333]: Skipping /boot
Jul 15 05:20:04.079105 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 05:20:04.079117 systemd-tmpfiles[1333]: Skipping /boot
Jul 15 05:20:04.098373 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Jul 15 05:20:04.122522 zram_generator::config[1361]: No configuration found.
Jul 15 05:20:04.262968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 05:20:04.353533 kernel: mousedev: PS/2 mouse device common for all mice
Jul 15 05:20:04.377521 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 15 05:20:04.391370 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 15 05:20:04.391724 systemd[1]: Reloading finished in 346 ms.
Jul 15 05:20:04.400634 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:20:04.401686 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:20:04.406535 kernel: ACPI: button: Power Button [PWRF]
Jul 15 05:20:04.413522 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 15 05:20:04.415561 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 15 05:20:04.440703 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 05:20:04.446939 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 05:20:04.449595 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 05:20:04.456680 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 05:20:04.460902 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 05:20:04.464101 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 05:20:04.473713 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:20:04.474687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:20:04.480239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 05:20:04.482984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 05:20:04.485585 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 05:20:04.486245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:20:04.486331 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:20:04.486404 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:20:04.495577 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 05:20:04.498981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:20:04.499130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:20:04.499272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:20:04.499343 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:20:04.499411 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:20:04.504688 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:20:04.504995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:20:04.513069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 05:20:04.514588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:20:04.514677 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:20:04.514786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:20:04.522355 systemd[1]: Finished ensure-sysext.service.
Jul 15 05:20:04.535959 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 15 05:20:04.538726 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 05:20:04.541538 kernel: EDAC MC: Ver: 3.0.0
Jul 15 05:20:04.546094 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 05:20:04.555737 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 05:20:04.582534 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 05:20:04.600741 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 05:20:04.601806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 05:20:04.602035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 05:20:04.603526 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 05:20:04.604649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 05:20:04.610411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 05:20:04.610648 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 05:20:04.614479 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 05:20:04.615561 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 05:20:04.623049 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 15 05:20:04.624044 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 05:20:04.628877 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 05:20:04.630562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 05:20:04.630637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 05:20:04.648545 augenrules[1501]: No rules
Jul 15 05:20:04.648786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:20:04.654895 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 05:20:04.655158 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 05:20:04.662758 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 05:20:04.721241 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 05:20:04.795444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:20:04.873578 systemd-networkd[1452]: lo: Link UP
Jul 15 05:20:04.873589 systemd-networkd[1452]: lo: Gained carrier
Jul 15 05:20:04.876154 systemd-networkd[1452]: Enumeration completed
Jul 15 05:20:04.876241 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 05:20:04.876884 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:20:04.876897 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 05:20:04.880025 systemd-networkd[1452]: eth0: Link UP
Jul 15 05:20:04.880162 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 05:20:04.880182 systemd-networkd[1452]: eth0: Gained carrier
Jul 15 05:20:04.880197 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:20:04.886662 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 05:20:04.898019 systemd-resolved[1453]: Positive Trust Anchors:
Jul 15 05:20:04.898038 systemd-resolved[1453]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 05:20:04.898065 systemd-resolved[1453]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 05:20:04.901349 systemd-resolved[1453]: Defaulting to hostname 'linux'.
Jul 15 05:20:04.903283 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 05:20:04.904814 systemd[1]: Reached target network.target - Network.
Jul 15 05:20:04.905559 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:20:04.908764 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 15 05:20:04.909394 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 05:20:04.910180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 05:20:04.910945 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 05:20:04.911532 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 15 05:20:04.912074 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 05:20:04.912658 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 05:20:04.912688 systemd[1]: Reached target paths.target - Path Units.
Jul 15 05:20:04.913167 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 05:20:04.913830 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 05:20:04.914486 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 05:20:04.915076 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 05:20:04.916678 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 05:20:04.918819 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 05:20:04.921178 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 05:20:04.921925 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 05:20:04.922513 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 05:20:04.925380 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 05:20:04.926215 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 05:20:04.927733 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 05:20:04.928469 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 05:20:04.930627 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 05:20:04.931360 systemd[1]: Reached target basic.target - Basic System.
Jul 15 05:20:04.932137 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 05:20:04.932174 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 05:20:04.933403 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 05:20:04.935029 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 15 05:20:04.940460 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 05:20:04.942483 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 05:20:04.945635 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 05:20:04.950660 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 05:20:04.951200 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 05:20:04.955697 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 15 05:20:04.965125 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 05:20:04.971979 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 05:20:04.976034 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 05:20:04.980681 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 05:20:04.987327 jq[1532]: false
Jul 15 05:20:04.990731 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 05:20:04.992934 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 05:20:04.993934 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 05:20:04.999465 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 05:20:05.004914 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 05:20:05.028647 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing passwd entry cache
Jul 15 05:20:05.028654 oslogin_cache_refresh[1534]: Refreshing passwd entry cache
Jul 15 05:20:05.030457 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting users, quitting
Jul 15 05:20:05.030457 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 05:20:05.030457 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing group entry cache
Jul 15 05:20:05.030391 oslogin_cache_refresh[1534]: Failure getting users, quitting
Jul 15 05:20:05.030405 oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 05:20:05.030444 oslogin_cache_refresh[1534]: Refreshing group entry cache
Jul 15 05:20:05.036025 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting groups, quitting
Jul 15 05:20:05.036025 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 05:20:05.036016 oslogin_cache_refresh[1534]: Failure getting groups, quitting
Jul 15 05:20:05.036026 oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 05:20:05.039525 extend-filesystems[1533]: Found /dev/sda6
Jul 15 05:20:05.046332 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 05:20:05.047735 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 05:20:05.048544 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 05:20:05.048881 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 15 05:20:05.049111 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 15 05:20:05.051157 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 05:20:05.054410 coreos-metadata[1529]: Jul 15 05:20:05.052 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 15 05:20:05.052394 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 05:20:05.056958 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 05:20:05.057532 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 05:20:05.064612 jq[1548]: true
Jul 15 05:20:05.064811 extend-filesystems[1533]: Found /dev/sda9
Jul 15 05:20:05.072166 extend-filesystems[1533]: Checking size of /dev/sda9
Jul 15 05:20:05.091850 update_engine[1544]: I20250715 05:20:05.091017 1544 main.cc:92] Flatcar Update Engine starting
Jul 15 05:20:05.108666 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 05:20:05.116525 tar[1559]: linux-amd64/helm
Jul 15 05:20:05.120534 extend-filesystems[1533]: Resized partition /dev/sda9
Jul 15 05:20:05.125423 jq[1567]: true
Jul 15 05:20:05.132632 extend-filesystems[1579]: resize2fs 1.47.2 (1-Jan-2025)
Jul 15 05:20:05.141723 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Jul 15 05:20:05.142556 dbus-daemon[1530]: [system] SELinux support is enabled
Jul 15 05:20:05.142700 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 05:20:05.147752 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 05:20:05.147792 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 05:20:05.149858 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 05:20:05.149882 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 05:20:05.152419 systemd-logind[1541]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 15 05:20:05.152438 systemd-logind[1541]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 05:20:05.158217 systemd-logind[1541]: New seat seat0.
Jul 15 05:20:05.161649 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 05:20:05.164334 update_engine[1544]: I20250715 05:20:05.163637 1544 update_check_scheduler.cc:74] Next update check in 8m20s
Jul 15 05:20:05.169596 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 05:20:05.173420 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 05:20:05.248808 bash[1596]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 05:20:05.259070 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 05:20:05.266015 systemd[1]: Starting sshkeys.service...
Jul 15 05:20:05.334243 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 15 05:20:05.336726 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 15 05:20:05.374713 systemd-networkd[1452]: eth0: DHCPv4 address 172.237.133.19/24, gateway 172.237.133.1 acquired from 23.205.167.218
Jul 15 05:20:05.376911 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection.
Jul 15 05:20:05.386009 dbus-daemon[1530]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1452 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 15 05:20:05.390453 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 15 05:20:05.487557 coreos-metadata[1605]: Jul 15 05:20:05.487 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 15 05:20:05.502255 systemd-timesyncd[1464]: Contacted time server 158.51.99.19:123 (0.flatcar.pool.ntp.org).
Jul 15 05:20:05.502819 systemd-timesyncd[1464]: Initial clock synchronization to Tue 2025-07-15 05:20:05.455136 UTC.
Jul 15 05:20:05.521946 containerd[1571]: time="2025-07-15T05:20:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 05:20:05.524178 containerd[1571]: time="2025-07-15T05:20:05.524025075Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 15 05:20:05.531678 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Jul 15 05:20:05.537330 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 15 05:20:05.540714 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 15 05:20:05.541409 dbus-daemon[1530]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1609 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 15 05:20:05.547552 extend-filesystems[1579]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 15 05:20:05.547552 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 10
Jul 15 05:20:05.547552 extend-filesystems[1579]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Jul 15 05:20:05.545284 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 05:20:05.553023 extend-filesystems[1533]: Resized filesystem in /dev/sda9
Jul 15 05:20:05.545553 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 05:20:05.555386 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.557487285Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.34µs"
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.559787438Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.559807849Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.559956239Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.559971409Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.559993519Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.560056749Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.560068179Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.560280549Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.560293269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.560303669Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 05:20:05.560556 containerd[1571]: time="2025-07-15T05:20:05.560312459Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 05:20:05.560770 containerd[1571]: time="2025-07-15T05:20:05.560396489Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 05:20:05.560986 containerd[1571]: time="2025-07-15T05:20:05.560967860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 05:20:05.561059 containerd[1571]: time="2025-07-15T05:20:05.561044040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 05:20:05.561109 containerd[1571]: time="2025-07-15T05:20:05.561097300Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 05:20:05.561662 containerd[1571]: time="2025-07-15T05:20:05.561465611Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 05:20:05.561831 containerd[1571]: time="2025-07-15T05:20:05.561815272Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 05:20:05.564260 containerd[1571]: time="2025-07-15T05:20:05.563490264Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566561159Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566609499Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566629459Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566640049Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566684089Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566696469Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566709609Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566720229Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566730069Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566739659Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566747919Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566759569Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566871779Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 05:20:05.567128 containerd[1571]: time="2025-07-15T05:20:05.566891969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.566913509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.566924599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.566935189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.566946509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.566956909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.566967529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.566978489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.566993439Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.567006349Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.567066489Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 05:20:05.567387 containerd[1571]: time="2025-07-15T05:20:05.567079749Z" level=info msg="Start snapshots syncer"
Jul 15 05:20:05.568532 containerd[1571]: time="2025-07-15T05:20:05.567690440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 05:20:05.568532 containerd[1571]: time="2025-07-15T05:20:05.568023481Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 05:20:05.568657 containerd[1571]: time="2025-07-15T05:20:05.568231901Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 05:20:05.571198 containerd[1571]: time="2025-07-15T05:20:05.571176856Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 05:20:05.573303 containerd[1571]: time="2025-07-15T05:20:05.573258629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 05:20:05.573353 containerd[1571]: time="2025-07-15T05:20:05.573318899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 05:20:05.573353 containerd[1571]: time="2025-07-15T05:20:05.573338869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 05:20:05.573392 containerd[1571]: time="2025-07-15T05:20:05.573353379Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 05:20:05.573392 containerd[1571]: time="2025-07-15T05:20:05.573372549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 05:20:05.573392 containerd[1571]: time="2025-07-15T05:20:05.573388859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 05:20:05.573445 containerd[1571]: time="2025-07-15T05:20:05.573404509Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 05:20:05.573445 containerd[1571]: time="2025-07-15T05:20:05.573439609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 05:20:05.573482 containerd[1571]: time="2025-07-15T05:20:05.573455059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 05:20:05.573482 containerd[1571]: time="2025-07-15T05:20:05.573468269Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 05:20:05.573561 containerd[1571]: time="2025-07-15T05:20:05.573535059Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 05:20:05.573582 containerd[1571]: time="2025-07-15T05:20:05.573565229Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 05:20:05.573582 containerd[1571]: time="2025-07-15T05:20:05.573575519Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 05:20:05.573616 containerd[1571]: time="2025-07-15T05:20:05.573587819Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 05:20:05.573616 containerd[1571]: time="2025-07-15T05:20:05.573598439Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 05:20:05.573616 containerd[1571]: time="2025-07-15T05:20:05.573613569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 05:20:05.573674 containerd[1571]: time="2025-07-15T05:20:05.573626999Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 05:20:05.573674 containerd[1571]: time="2025-07-15T05:20:05.573649269Z" level=info msg="runtime interface created"
Jul 15 05:20:05.573674 containerd[1571]:
time="2025-07-15T05:20:05.573655579Z" level=info msg="created NRI interface" Jul 15 05:20:05.573674 containerd[1571]: time="2025-07-15T05:20:05.573667459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 05:20:05.573750 containerd[1571]: time="2025-07-15T05:20:05.573682559Z" level=info msg="Connect containerd service" Jul 15 05:20:05.573750 containerd[1571]: time="2025-07-15T05:20:05.573723279Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:20:05.577439 locksmithd[1582]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:20:05.578238 containerd[1571]: time="2025-07-15T05:20:05.577682905Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:20:05.593601 coreos-metadata[1605]: Jul 15 05:20:05.593 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jul 15 05:20:05.651777 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 05:20:05.683785 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 05:20:05.688594 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jul 15 05:20:05.701630 polkitd[1618]: Started polkitd version 126 Jul 15 05:20:05.710613 polkitd[1618]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 05:20:05.712470 polkitd[1618]: Loading rules from directory /run/polkit-1/rules.d Jul 15 05:20:05.712593 polkitd[1618]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 05:20:05.712846 polkitd[1618]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 05:20:05.715306 polkitd[1618]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 05:20:05.715406 polkitd[1618]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 05:20:05.717475 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 05:20:05.718709 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 05:20:05.718994 polkitd[1618]: Finished loading, compiling and executing 2 rules Jul 15 05:20:05.719896 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 05:20:05.720398 systemd[1]: Started polkit.service - Authorization Manager. Jul 15 05:20:05.723477 polkitd[1618]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 05:20:05.725041 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 05:20:05.727819 coreos-metadata[1605]: Jul 15 05:20:05.727 INFO Fetch successful Jul 15 05:20:05.752929 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 05:20:05.755983 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 05:20:05.761811 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 05:20:05.762830 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 15 05:20:05.767086 update-ssh-keys[1654]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:20:05.769297 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 15 05:20:05.774292 systemd[1]: Finished sshkeys.service. Jul 15 05:20:05.775890 containerd[1571]: time="2025-07-15T05:20:05.775851083Z" level=info msg="Start subscribing containerd event" Jul 15 05:20:05.776179 containerd[1571]: time="2025-07-15T05:20:05.776138703Z" level=info msg="Start recovering state" Jul 15 05:20:05.776643 containerd[1571]: time="2025-07-15T05:20:05.776617054Z" level=info msg="Start event monitor" Jul 15 05:20:05.776643 containerd[1571]: time="2025-07-15T05:20:05.776642664Z" level=info msg="Start cni network conf syncer for default" Jul 15 05:20:05.776695 containerd[1571]: time="2025-07-15T05:20:05.776652994Z" level=info msg="Start streaming server" Jul 15 05:20:05.777021 containerd[1571]: time="2025-07-15T05:20:05.776995704Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 05:20:05.777021 containerd[1571]: time="2025-07-15T05:20:05.777015594Z" level=info msg="runtime interface starting up..." Jul 15 05:20:05.777061 containerd[1571]: time="2025-07-15T05:20:05.777024084Z" level=info msg="starting plugins..." Jul 15 05:20:05.777061 containerd[1571]: time="2025-07-15T05:20:05.777041284Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 05:20:05.778154 systemd-resolved[1453]: System hostname changed to '172-237-133-19'. Jul 15 05:20:05.778230 systemd-hostnamed[1609]: Hostname set to <172-237-133-19> (transient) Jul 15 05:20:05.778439 containerd[1571]: time="2025-07-15T05:20:05.778332016Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 05:20:05.778842 containerd[1571]: time="2025-07-15T05:20:05.778817117Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 05:20:05.778942 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 15 05:20:05.779576 containerd[1571]: time="2025-07-15T05:20:05.779548828Z" level=info msg="containerd successfully booted in 0.258894s" Jul 15 05:20:05.843262 tar[1559]: linux-amd64/LICENSE Jul 15 05:20:05.843396 tar[1559]: linux-amd64/README.md Jul 15 05:20:05.858801 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 05:20:06.063572 coreos-metadata[1529]: Jul 15 05:20:06.063 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jul 15 05:20:06.149135 coreos-metadata[1529]: Jul 15 05:20:06.149 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jul 15 05:20:06.330656 coreos-metadata[1529]: Jul 15 05:20:06.330 INFO Fetch successful Jul 15 05:20:06.330791 coreos-metadata[1529]: Jul 15 05:20:06.330 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jul 15 05:20:06.594322 coreos-metadata[1529]: Jul 15 05:20:06.594 INFO Fetch successful Jul 15 05:20:06.706795 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 05:20:06.707811 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 05:20:06.812692 systemd-networkd[1452]: eth0: Gained IPv6LL Jul 15 05:20:06.815162 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 05:20:06.816170 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 05:20:06.819649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:06.823408 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 05:20:06.849437 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 05:20:07.707914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:07.709229 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 05:20:07.711575 systemd[1]: Startup finished in 2.551s (kernel) + 7.789s (initrd) + 5.354s (userspace) = 15.696s. 
Jul 15 05:20:07.717776 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:20:08.205950 kubelet[1702]: E0715 05:20:08.205795 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:20:08.209537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:20:08.209739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:20:08.210252 systemd[1]: kubelet.service: Consumed 863ms CPU time, 262.4M memory peak. Jul 15 05:20:08.392979 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 05:20:08.394290 systemd[1]: Started sshd@0-172.237.133.19:22-139.178.68.195:42772.service - OpenSSH per-connection server daemon (139.178.68.195:42772). Jul 15 05:20:08.745970 sshd[1714]: Accepted publickey for core from 139.178.68.195 port 42772 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:20:08.748489 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:08.755347 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 05:20:08.756925 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 05:20:08.764881 systemd-logind[1541]: New session 1 of user core. Jul 15 05:20:08.776311 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 05:20:08.779585 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 15 05:20:08.789775 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 05:20:08.792702 systemd-logind[1541]: New session c1 of user core. Jul 15 05:20:08.928948 systemd[1719]: Queued start job for default target default.target. Jul 15 05:20:08.935694 systemd[1719]: Created slice app.slice - User Application Slice. Jul 15 05:20:08.935721 systemd[1719]: Reached target paths.target - Paths. Jul 15 05:20:08.935764 systemd[1719]: Reached target timers.target - Timers. Jul 15 05:20:08.937195 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 05:20:08.949698 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 05:20:08.949916 systemd[1719]: Reached target sockets.target - Sockets. Jul 15 05:20:08.950033 systemd[1719]: Reached target basic.target - Basic System. Jul 15 05:20:08.950133 systemd[1719]: Reached target default.target - Main User Target. Jul 15 05:20:08.950222 systemd[1719]: Startup finished in 151ms. Jul 15 05:20:08.950234 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 05:20:08.958639 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 05:20:09.211939 systemd[1]: Started sshd@1-172.237.133.19:22-139.178.68.195:42780.service - OpenSSH per-connection server daemon (139.178.68.195:42780). Jul 15 05:20:09.547765 sshd[1730]: Accepted publickey for core from 139.178.68.195 port 42780 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:20:09.549138 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:09.554227 systemd-logind[1541]: New session 2 of user core. Jul 15 05:20:09.556609 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 15 05:20:09.791091 sshd[1733]: Connection closed by 139.178.68.195 port 42780 Jul 15 05:20:09.792078 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:09.797306 systemd[1]: sshd@1-172.237.133.19:22-139.178.68.195:42780.service: Deactivated successfully. Jul 15 05:20:09.799718 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 05:20:09.800926 systemd-logind[1541]: Session 2 logged out. Waiting for processes to exit. Jul 15 05:20:09.803164 systemd-logind[1541]: Removed session 2. Jul 15 05:20:09.865962 systemd[1]: Started sshd@2-172.237.133.19:22-139.178.68.195:42792.service - OpenSSH per-connection server daemon (139.178.68.195:42792). Jul 15 05:20:10.207634 sshd[1739]: Accepted publickey for core from 139.178.68.195 port 42792 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:20:10.209308 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:10.213840 systemd-logind[1541]: New session 3 of user core. Jul 15 05:20:10.226666 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 05:20:10.455659 sshd[1742]: Connection closed by 139.178.68.195 port 42792 Jul 15 05:20:10.456161 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:10.461821 systemd[1]: sshd@2-172.237.133.19:22-139.178.68.195:42792.service: Deactivated successfully. Jul 15 05:20:10.466599 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 05:20:10.468542 systemd-logind[1541]: Session 3 logged out. Waiting for processes to exit. Jul 15 05:20:10.470349 systemd-logind[1541]: Removed session 3. Jul 15 05:20:10.513064 systemd[1]: Started sshd@3-172.237.133.19:22-139.178.68.195:56168.service - OpenSSH per-connection server daemon (139.178.68.195:56168). 
Jul 15 05:20:10.842939 sshd[1748]: Accepted publickey for core from 139.178.68.195 port 56168 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:20:10.844667 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:10.849762 systemd-logind[1541]: New session 4 of user core. Jul 15 05:20:10.856705 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 05:20:11.083981 sshd[1751]: Connection closed by 139.178.68.195 port 56168 Jul 15 05:20:11.084570 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:11.088433 systemd-logind[1541]: Session 4 logged out. Waiting for processes to exit. Jul 15 05:20:11.089098 systemd[1]: sshd@3-172.237.133.19:22-139.178.68.195:56168.service: Deactivated successfully. Jul 15 05:20:11.091189 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 05:20:11.093033 systemd-logind[1541]: Removed session 4. Jul 15 05:20:11.143412 systemd[1]: Started sshd@4-172.237.133.19:22-139.178.68.195:56182.service - OpenSSH per-connection server daemon (139.178.68.195:56182). Jul 15 05:20:11.476537 sshd[1757]: Accepted publickey for core from 139.178.68.195 port 56182 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:20:11.478142 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:11.481958 systemd-logind[1541]: New session 5 of user core. Jul 15 05:20:11.490621 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 15 05:20:11.679056 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 05:20:11.679343 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:20:11.696772 sudo[1761]: pam_unix(sudo:session): session closed for user root Jul 15 05:20:11.746371 sshd[1760]: Connection closed by 139.178.68.195 port 56182 Jul 15 05:20:11.747357 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:11.751259 systemd[1]: sshd@4-172.237.133.19:22-139.178.68.195:56182.service: Deactivated successfully. Jul 15 05:20:11.753157 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 05:20:11.754275 systemd-logind[1541]: Session 5 logged out. Waiting for processes to exit. Jul 15 05:20:11.755315 systemd-logind[1541]: Removed session 5. Jul 15 05:20:11.808645 systemd[1]: Started sshd@5-172.237.133.19:22-139.178.68.195:56186.service - OpenSSH per-connection server daemon (139.178.68.195:56186). Jul 15 05:20:12.154275 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 56186 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:20:12.155934 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:12.161630 systemd-logind[1541]: New session 6 of user core. Jul 15 05:20:12.172634 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 15 05:20:12.354196 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 05:20:12.354546 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:20:12.359803 sudo[1772]: pam_unix(sudo:session): session closed for user root Jul 15 05:20:12.366094 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 05:20:12.366425 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:20:12.377438 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:20:12.415118 augenrules[1794]: No rules Jul 15 05:20:12.417219 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:20:12.417460 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:20:12.421874 sudo[1771]: pam_unix(sudo:session): session closed for user root Jul 15 05:20:12.473446 sshd[1770]: Connection closed by 139.178.68.195 port 56186 Jul 15 05:20:12.474308 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:12.479253 systemd[1]: sshd@5-172.237.133.19:22-139.178.68.195:56186.service: Deactivated successfully. Jul 15 05:20:12.481721 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 05:20:12.482538 systemd-logind[1541]: Session 6 logged out. Waiting for processes to exit. Jul 15 05:20:12.484676 systemd-logind[1541]: Removed session 6. Jul 15 05:20:12.531654 systemd[1]: Started sshd@6-172.237.133.19:22-139.178.68.195:56194.service - OpenSSH per-connection server daemon (139.178.68.195:56194). 
Jul 15 05:20:12.869713 sshd[1803]: Accepted publickey for core from 139.178.68.195 port 56194 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:20:12.871066 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:12.875922 systemd-logind[1541]: New session 7 of user core. Jul 15 05:20:12.881623 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 05:20:13.066303 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 05:20:13.066690 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:20:13.336670 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 05:20:13.358806 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 05:20:13.555111 dockerd[1825]: time="2025-07-15T05:20:13.554821386Z" level=info msg="Starting up" Jul 15 05:20:13.556773 dockerd[1825]: time="2025-07-15T05:20:13.556710325Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 05:20:13.568871 dockerd[1825]: time="2025-07-15T05:20:13.568828565Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 05:20:13.610755 dockerd[1825]: time="2025-07-15T05:20:13.610651267Z" level=info msg="Loading containers: start." Jul 15 05:20:13.621619 kernel: Initializing XFRM netlink socket Jul 15 05:20:13.863914 systemd-networkd[1452]: docker0: Link UP Jul 15 05:20:13.868837 dockerd[1825]: time="2025-07-15T05:20:13.868206555Z" level=info msg="Loading containers: done." 
Jul 15 05:20:13.882224 dockerd[1825]: time="2025-07-15T05:20:13.882169186Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 05:20:13.882344 dockerd[1825]: time="2025-07-15T05:20:13.882284054Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 05:20:13.883067 dockerd[1825]: time="2025-07-15T05:20:13.882383790Z" level=info msg="Initializing buildkit" Jul 15 05:20:13.904271 dockerd[1825]: time="2025-07-15T05:20:13.904242800Z" level=info msg="Completed buildkit initialization" Jul 15 05:20:13.913612 dockerd[1825]: time="2025-07-15T05:20:13.913587730Z" level=info msg="Daemon has completed initialization" Jul 15 05:20:13.913734 dockerd[1825]: time="2025-07-15T05:20:13.913702378Z" level=info msg="API listen on /run/docker.sock" Jul 15 05:20:13.913798 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 05:20:14.439899 containerd[1571]: time="2025-07-15T05:20:14.439618693Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 15 05:20:15.348340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073573507.mount: Deactivated successfully. 
Jul 15 05:20:16.553526 containerd[1571]: time="2025-07-15T05:20:16.552117031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:16.553526 containerd[1571]: time="2025-07-15T05:20:16.553198840Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 15 05:20:16.553526 containerd[1571]: time="2025-07-15T05:20:16.553437398Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:16.556075 containerd[1571]: time="2025-07-15T05:20:16.555892272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:16.556574 containerd[1571]: time="2025-07-15T05:20:16.556539920Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.116886475s" Jul 15 05:20:16.556613 containerd[1571]: time="2025-07-15T05:20:16.556578491Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 15 05:20:16.561271 containerd[1571]: time="2025-07-15T05:20:16.561247915Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 15 05:20:18.128221 containerd[1571]: time="2025-07-15T05:20:18.128147218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:18.129185 containerd[1571]: time="2025-07-15T05:20:18.129102156Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 15 05:20:18.129707 containerd[1571]: time="2025-07-15T05:20:18.129663178Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:18.131853 containerd[1571]: time="2025-07-15T05:20:18.131815448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:18.133059 containerd[1571]: time="2025-07-15T05:20:18.132733823Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.571401245s" Jul 15 05:20:18.133059 containerd[1571]: time="2025-07-15T05:20:18.132769645Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 15 05:20:18.134397 containerd[1571]: time="2025-07-15T05:20:18.134359486Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 15 05:20:18.460132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 05:20:18.462891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:18.639408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 05:20:18.650775 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:20:18.696824 kubelet[2103]: E0715 05:20:18.696754 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:20:18.702567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:20:18.702769 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:20:18.703330 systemd[1]: kubelet.service: Consumed 199ms CPU time, 110.9M memory peak. Jul 15 05:20:19.394289 containerd[1571]: time="2025-07-15T05:20:19.393482516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:19.394735 containerd[1571]: time="2025-07-15T05:20:19.394374022Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 15 05:20:19.395106 containerd[1571]: time="2025-07-15T05:20:19.395066666Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:19.396843 containerd[1571]: time="2025-07-15T05:20:19.396796494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:19.398157 containerd[1571]: time="2025-07-15T05:20:19.398104350Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id 
\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.263712966s" Jul 15 05:20:19.398157 containerd[1571]: time="2025-07-15T05:20:19.398141303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 15 05:20:19.399063 containerd[1571]: time="2025-07-15T05:20:19.399022861Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 15 05:20:20.635870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2623871836.mount: Deactivated successfully. Jul 15 05:20:20.971208 containerd[1571]: time="2025-07-15T05:20:20.970477497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:20.971208 containerd[1571]: time="2025-07-15T05:20:20.971101506Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 15 05:20:20.972319 containerd[1571]: time="2025-07-15T05:20:20.971609370Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:20.973449 containerd[1571]: time="2025-07-15T05:20:20.973402280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:20.973876 containerd[1571]: time="2025-07-15T05:20:20.973833386Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo 
tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.574780631s" Jul 15 05:20:20.973918 containerd[1571]: time="2025-07-15T05:20:20.973876125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 15 05:20:20.974877 containerd[1571]: time="2025-07-15T05:20:20.974836190Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 05:20:21.763445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273002980.mount: Deactivated successfully. Jul 15 05:20:22.405252 containerd[1571]: time="2025-07-15T05:20:22.405198298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:22.406267 containerd[1571]: time="2025-07-15T05:20:22.406126402Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 15 05:20:22.406784 containerd[1571]: time="2025-07-15T05:20:22.406752548Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:22.409085 containerd[1571]: time="2025-07-15T05:20:22.409051412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:22.409933 containerd[1571]: time="2025-07-15T05:20:22.409897181Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.435031255s" Jul 15 05:20:22.409933 containerd[1571]: time="2025-07-15T05:20:22.409932435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 05:20:22.412936 containerd[1571]: time="2025-07-15T05:20:22.412913467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:20:23.096418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1400529185.mount: Deactivated successfully. Jul 15 05:20:23.101538 containerd[1571]: time="2025-07-15T05:20:23.100980835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:20:23.101949 containerd[1571]: time="2025-07-15T05:20:23.101911746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 05:20:23.102802 containerd[1571]: time="2025-07-15T05:20:23.102752764Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:20:23.104532 containerd[1571]: time="2025-07-15T05:20:23.104488610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:20:23.105095 containerd[1571]: time="2025-07-15T05:20:23.105071368Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 692.130389ms" Jul 15 05:20:23.105224 containerd[1571]: time="2025-07-15T05:20:23.105208735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:20:23.105848 containerd[1571]: time="2025-07-15T05:20:23.105810614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 05:20:23.826790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157805813.mount: Deactivated successfully. Jul 15 05:20:28.140214 containerd[1571]: time="2025-07-15T05:20:28.140156718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:28.141597 containerd[1571]: time="2025-07-15T05:20:28.141059228Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 15 05:20:28.142119 containerd[1571]: time="2025-07-15T05:20:28.142082723Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:28.143937 containerd[1571]: time="2025-07-15T05:20:28.143883296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:28.144731 containerd[1571]: time="2025-07-15T05:20:28.144692941Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 5.038850785s" Jul 15 05:20:28.145075 containerd[1571]: time="2025-07-15T05:20:28.144736001Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 15 05:20:28.770971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 05:20:28.774631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:28.948609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:28.955754 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:20:28.988327 kubelet[2260]: E0715 05:20:28.988207 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:20:28.993490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:20:28.993799 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:20:28.994403 systemd[1]: kubelet.service: Consumed 176ms CPU time, 110.3M memory peak. Jul 15 05:20:30.023425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:30.023813 systemd[1]: kubelet.service: Consumed 176ms CPU time, 110.3M memory peak. Jul 15 05:20:30.027651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:30.057261 systemd[1]: Reload requested from client PID 2274 ('systemctl') (unit session-7.scope)... Jul 15 05:20:30.057378 systemd[1]: Reloading... Jul 15 05:20:30.194547 zram_generator::config[2318]: No configuration found. 
Jul 15 05:20:30.291900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:20:30.395744 systemd[1]: Reloading finished in 335 ms. Jul 15 05:20:30.451235 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:20:30.451350 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:20:30.451666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:30.451711 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.3M memory peak. Jul 15 05:20:30.453181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:30.622989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:30.629906 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:20:30.667553 kubelet[2372]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:20:30.667553 kubelet[2372]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 05:20:30.667553 kubelet[2372]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 05:20:30.667553 kubelet[2372]: I0715 05:20:30.667100 2372 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:20:31.089153 kubelet[2372]: I0715 05:20:31.089113 2372 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 05:20:31.089153 kubelet[2372]: I0715 05:20:31.089140 2372 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:20:31.089382 kubelet[2372]: I0715 05:20:31.089351 2372 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 05:20:31.112519 kubelet[2372]: E0715 05:20:31.111072 2372 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.133.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.133.19:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:31.112519 kubelet[2372]: I0715 05:20:31.112435 2372 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:20:31.123198 kubelet[2372]: I0715 05:20:31.123169 2372 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:20:31.127871 kubelet[2372]: I0715 05:20:31.127847 2372 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:20:31.128575 kubelet[2372]: I0715 05:20:31.128546 2372 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 05:20:31.128747 kubelet[2372]: I0715 05:20:31.128709 2372 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:20:31.128922 kubelet[2372]: I0715 05:20:31.128737 2372 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-133-19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPo
licyOptions":null,"CgroupVersion":2} Jul 15 05:20:31.129014 kubelet[2372]: I0715 05:20:31.128927 2372 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:20:31.129014 kubelet[2372]: I0715 05:20:31.128936 2372 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 05:20:31.129058 kubelet[2372]: I0715 05:20:31.129032 2372 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:20:31.131565 kubelet[2372]: I0715 05:20:31.131373 2372 kubelet.go:408] "Attempting to sync node with API server" Jul 15 05:20:31.131565 kubelet[2372]: I0715 05:20:31.131393 2372 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:20:31.131565 kubelet[2372]: I0715 05:20:31.131424 2372 kubelet.go:314] "Adding apiserver pod source" Jul 15 05:20:31.131565 kubelet[2372]: I0715 05:20:31.131443 2372 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:20:31.134358 kubelet[2372]: W0715 05:20:31.134326 2372 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.133.19:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-133-19&limit=500&resourceVersion=0": dial tcp 172.237.133.19:6443: connect: connection refused Jul 15 05:20:31.134437 kubelet[2372]: E0715 05:20:31.134418 2372 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.133.19:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-133-19&limit=500&resourceVersion=0\": dial tcp 172.237.133.19:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:31.134927 kubelet[2372]: W0715 05:20:31.134854 2372 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.133.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.133.19:6443: connect: connection refused 
Jul 15 05:20:31.134927 kubelet[2372]: E0715 05:20:31.134890 2372 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.133.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.133.19:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:31.135048 kubelet[2372]: I0715 05:20:31.135035 2372 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:20:31.135430 kubelet[2372]: I0715 05:20:31.135417 2372 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:20:31.136082 kubelet[2372]: W0715 05:20:31.136067 2372 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 05:20:31.138640 kubelet[2372]: I0715 05:20:31.138625 2372 server.go:1274] "Started kubelet" Jul 15 05:20:31.139865 kubelet[2372]: I0715 05:20:31.139851 2372 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:20:31.142606 kubelet[2372]: E0715 05:20:31.141526 2372 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.133.19:6443/api/v1/namespaces/default/events\": dial tcp 172.237.133.19:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-133-19.1852552708b34f27 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-133-19,UID:172-237-133-19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-133-19,},FirstTimestamp:2025-07-15 05:20:31.138598695 +0000 UTC m=+0.505484768,LastTimestamp:2025-07-15 05:20:31.138598695 +0000 UTC m=+0.505484768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-133-19,}" Jul 15 05:20:31.146212 kubelet[2372]: I0715 05:20:31.146187 2372 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:20:31.146907 kubelet[2372]: I0715 05:20:31.146876 2372 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 05:20:31.147119 kubelet[2372]: I0715 05:20:31.147106 2372 server.go:449] "Adding debug handlers to kubelet server" Jul 15 05:20:31.147281 kubelet[2372]: E0715 05:20:31.147253 2372 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-237-133-19\" not found" Jul 15 05:20:31.149971 kubelet[2372]: I0715 05:20:31.148958 2372 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:20:31.149971 kubelet[2372]: I0715 05:20:31.149136 2372 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:20:31.149971 kubelet[2372]: I0715 05:20:31.149284 2372 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:20:31.149971 kubelet[2372]: E0715 05:20:31.149540 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-19?timeout=10s\": dial tcp 172.237.133.19:6443: connect: connection refused" interval="200ms" Jul 15 05:20:31.149971 kubelet[2372]: I0715 05:20:31.149682 2372 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:20:31.149971 kubelet[2372]: I0715 05:20:31.149709 2372 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 05:20:31.149971 kubelet[2372]: W0715 05:20:31.149917 2372 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.237.133.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.133.19:6443: connect: connection refused Jul 15 05:20:31.149971 kubelet[2372]: E0715 05:20:31.149945 2372 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.133.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.133.19:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:31.150974 kubelet[2372]: I0715 05:20:31.150958 2372 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:20:31.151100 kubelet[2372]: E0715 05:20:31.151086 2372 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:20:31.151234 kubelet[2372]: I0715 05:20:31.151218 2372 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:20:31.152473 kubelet[2372]: I0715 05:20:31.152458 2372 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:20:31.169264 kubelet[2372]: I0715 05:20:31.169237 2372 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:20:31.173700 kubelet[2372]: I0715 05:20:31.173683 2372 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 05:20:31.173771 kubelet[2372]: I0715 05:20:31.173761 2372 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 05:20:31.173830 kubelet[2372]: I0715 05:20:31.173821 2372 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 05:20:31.173934 kubelet[2372]: E0715 05:20:31.173917 2372 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:20:31.175092 kubelet[2372]: W0715 05:20:31.175058 2372 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.133.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.133.19:6443: connect: connection refused Jul 15 05:20:31.175138 kubelet[2372]: E0715 05:20:31.175093 2372 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.133.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.133.19:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:31.176008 kubelet[2372]: I0715 05:20:31.175990 2372 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 05:20:31.176094 kubelet[2372]: I0715 05:20:31.176084 2372 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 05:20:31.176145 kubelet[2372]: I0715 05:20:31.176137 2372 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:20:31.178007 kubelet[2372]: I0715 05:20:31.177994 2372 policy_none.go:49] "None policy: Start" Jul 15 05:20:31.178670 kubelet[2372]: I0715 05:20:31.178657 2372 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 05:20:31.178783 kubelet[2372]: I0715 05:20:31.178773 2372 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:20:31.184579 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jul 15 05:20:31.203883 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:20:31.207530 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 05:20:31.216353 kubelet[2372]: I0715 05:20:31.216316 2372 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:20:31.216579 kubelet[2372]: I0715 05:20:31.216549 2372 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:20:31.216612 kubelet[2372]: I0715 05:20:31.216566 2372 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:20:31.216922 kubelet[2372]: I0715 05:20:31.216904 2372 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:20:31.219752 kubelet[2372]: E0715 05:20:31.219659 2372 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-133-19\" not found" Jul 15 05:20:31.284654 systemd[1]: Created slice kubepods-burstable-podfceec3381fed0035e1bb17f6a2025e9a.slice - libcontainer container kubepods-burstable-podfceec3381fed0035e1bb17f6a2025e9a.slice. Jul 15 05:20:31.295913 systemd[1]: Created slice kubepods-burstable-pod951d8873ca793ce5202651ffc5533416.slice - libcontainer container kubepods-burstable-pod951d8873ca793ce5202651ffc5533416.slice. Jul 15 05:20:31.300332 systemd[1]: Created slice kubepods-burstable-podad6d5674b800fccf7124f93fd4b50c74.slice - libcontainer container kubepods-burstable-podad6d5674b800fccf7124f93fd4b50c74.slice. 
Jul 15 05:20:31.318170 kubelet[2372]: I0715 05:20:31.318143 2372 kubelet_node_status.go:72] "Attempting to register node" node="172-237-133-19" Jul 15 05:20:31.318467 kubelet[2372]: E0715 05:20:31.318422 2372 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.237.133.19:6443/api/v1/nodes\": dial tcp 172.237.133.19:6443: connect: connection refused" node="172-237-133-19" Jul 15 05:20:31.350938 kubelet[2372]: E0715 05:20:31.350848 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-19?timeout=10s\": dial tcp 172.237.133.19:6443: connect: connection refused" interval="400ms" Jul 15 05:20:31.451255 kubelet[2372]: I0715 05:20:31.451203 2372 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fceec3381fed0035e1bb17f6a2025e9a-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-133-19\" (UID: \"fceec3381fed0035e1bb17f6a2025e9a\") " pod="kube-system/kube-apiserver-172-237-133-19" Jul 15 05:20:31.451255 kubelet[2372]: I0715 05:20:31.451229 2372 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-ca-certs\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:31.451255 kubelet[2372]: I0715 05:20:31.451248 2372 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-k8s-certs\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 
05:20:31.451255 kubelet[2372]: I0715 05:20:31.451266 2372 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:31.451609 kubelet[2372]: I0715 05:20:31.451283 2372 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/951d8873ca793ce5202651ffc5533416-kubeconfig\") pod \"kube-scheduler-172-237-133-19\" (UID: \"951d8873ca793ce5202651ffc5533416\") " pod="kube-system/kube-scheduler-172-237-133-19" Jul 15 05:20:31.451609 kubelet[2372]: I0715 05:20:31.451296 2372 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fceec3381fed0035e1bb17f6a2025e9a-ca-certs\") pod \"kube-apiserver-172-237-133-19\" (UID: \"fceec3381fed0035e1bb17f6a2025e9a\") " pod="kube-system/kube-apiserver-172-237-133-19" Jul 15 05:20:31.451609 kubelet[2372]: I0715 05:20:31.451309 2372 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fceec3381fed0035e1bb17f6a2025e9a-k8s-certs\") pod \"kube-apiserver-172-237-133-19\" (UID: \"fceec3381fed0035e1bb17f6a2025e9a\") " pod="kube-system/kube-apiserver-172-237-133-19" Jul 15 05:20:31.451609 kubelet[2372]: I0715 05:20:31.451322 2372 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-flexvolume-dir\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " 
pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:31.451609 kubelet[2372]: I0715 05:20:31.451337 2372 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-kubeconfig\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:31.521115 kubelet[2372]: I0715 05:20:31.520844 2372 kubelet_node_status.go:72] "Attempting to register node" node="172-237-133-19" Jul 15 05:20:31.521249 kubelet[2372]: E0715 05:20:31.521211 2372 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.237.133.19:6443/api/v1/nodes\": dial tcp 172.237.133.19:6443: connect: connection refused" node="172-237-133-19" Jul 15 05:20:31.593001 kubelet[2372]: E0715 05:20:31.592962 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:31.593621 containerd[1571]: time="2025-07-15T05:20:31.593566750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-133-19,Uid:fceec3381fed0035e1bb17f6a2025e9a,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:31.599769 kubelet[2372]: E0715 05:20:31.599743 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:31.600310 containerd[1571]: time="2025-07-15T05:20:31.600165207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-133-19,Uid:951d8873ca793ce5202651ffc5533416,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:31.602966 kubelet[2372]: E0715 05:20:31.602657 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:31.603332 containerd[1571]: time="2025-07-15T05:20:31.603294468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-133-19,Uid:ad6d5674b800fccf7124f93fd4b50c74,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:31.617862 containerd[1571]: time="2025-07-15T05:20:31.617675610Z" level=info msg="connecting to shim e1ff0e51397f516ebe4b240172c6d1d56e998a7262f2a17c2371df8ad367c9d9" address="unix:///run/containerd/s/06c688e3021429fe436d9786b81f1b83e9254b1a2cc4abf05577ebbf3be12801" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:31.641704 containerd[1571]: time="2025-07-15T05:20:31.641679799Z" level=info msg="connecting to shim ca0f91ac758b6af7ae7f02fa3a2eecb4fad8556f71f26268465400e0778096df" address="unix:///run/containerd/s/96bb6338ab6c2a08b54496b5fbc64f38397866c82f83ec0dc702cee366afa0fc" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:31.650606 containerd[1571]: time="2025-07-15T05:20:31.650545622Z" level=info msg="connecting to shim e7e10d15ae740c473e6cd3d69d3f2954d8901dd93e5ff081afbc59b82f814a55" address="unix:///run/containerd/s/b5d1870bb3696d1d3480d961839e8d7600a421abd9a60bc0f9084cae8791e2a9" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:31.664628 systemd[1]: Started cri-containerd-e1ff0e51397f516ebe4b240172c6d1d56e998a7262f2a17c2371df8ad367c9d9.scope - libcontainer container e1ff0e51397f516ebe4b240172c6d1d56e998a7262f2a17c2371df8ad367c9d9. Jul 15 05:20:31.688640 systemd[1]: Started cri-containerd-e7e10d15ae740c473e6cd3d69d3f2954d8901dd93e5ff081afbc59b82f814a55.scope - libcontainer container e7e10d15ae740c473e6cd3d69d3f2954d8901dd93e5ff081afbc59b82f814a55. Jul 15 05:20:31.695544 systemd[1]: Started cri-containerd-ca0f91ac758b6af7ae7f02fa3a2eecb4fad8556f71f26268465400e0778096df.scope - libcontainer container ca0f91ac758b6af7ae7f02fa3a2eecb4fad8556f71f26268465400e0778096df. 
Jul 15 05:20:31.751936 kubelet[2372]: E0715 05:20:31.751905 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-19?timeout=10s\": dial tcp 172.237.133.19:6443: connect: connection refused" interval="800ms" Jul 15 05:20:31.760135 containerd[1571]: time="2025-07-15T05:20:31.760099286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-133-19,Uid:fceec3381fed0035e1bb17f6a2025e9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1ff0e51397f516ebe4b240172c6d1d56e998a7262f2a17c2371df8ad367c9d9\"" Jul 15 05:20:31.763058 kubelet[2372]: E0715 05:20:31.762873 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:31.768325 containerd[1571]: time="2025-07-15T05:20:31.768245313Z" level=info msg="CreateContainer within sandbox \"e1ff0e51397f516ebe4b240172c6d1d56e998a7262f2a17c2371df8ad367c9d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:20:31.775392 containerd[1571]: time="2025-07-15T05:20:31.775351586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-133-19,Uid:ad6d5674b800fccf7124f93fd4b50c74,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca0f91ac758b6af7ae7f02fa3a2eecb4fad8556f71f26268465400e0778096df\"" Jul 15 05:20:31.778236 kubelet[2372]: E0715 05:20:31.777604 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:31.778995 containerd[1571]: time="2025-07-15T05:20:31.778967768Z" level=info msg="Container a66467618dea9cba0e2e10ca88e5adf0de197961fa42d61ed3c8ae4ea9d6fc43: CDI devices from CRI Config.CDIDevices: []" Jul 15 
05:20:31.780294 containerd[1571]: time="2025-07-15T05:20:31.780268960Z" level=info msg="CreateContainer within sandbox \"ca0f91ac758b6af7ae7f02fa3a2eecb4fad8556f71f26268465400e0778096df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:20:31.786109 containerd[1571]: time="2025-07-15T05:20:31.786070874Z" level=info msg="CreateContainer within sandbox \"e1ff0e51397f516ebe4b240172c6d1d56e998a7262f2a17c2371df8ad367c9d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a66467618dea9cba0e2e10ca88e5adf0de197961fa42d61ed3c8ae4ea9d6fc43\"" Jul 15 05:20:31.787142 containerd[1571]: time="2025-07-15T05:20:31.787117932Z" level=info msg="StartContainer for \"a66467618dea9cba0e2e10ca88e5adf0de197961fa42d61ed3c8ae4ea9d6fc43\"" Jul 15 05:20:31.788448 containerd[1571]: time="2025-07-15T05:20:31.788419124Z" level=info msg="connecting to shim a66467618dea9cba0e2e10ca88e5adf0de197961fa42d61ed3c8ae4ea9d6fc43" address="unix:///run/containerd/s/06c688e3021429fe436d9786b81f1b83e9254b1a2cc4abf05577ebbf3be12801" protocol=ttrpc version=3 Jul 15 05:20:31.791559 containerd[1571]: time="2025-07-15T05:20:31.791363921Z" level=info msg="Container 49feb0edf7c93d99563bd02ac9d429faf7dec55f90ad73c27dabadcc38a73c2b: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:20:31.792091 containerd[1571]: time="2025-07-15T05:20:31.791889299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-133-19,Uid:951d8873ca793ce5202651ffc5533416,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7e10d15ae740c473e6cd3d69d3f2954d8901dd93e5ff081afbc59b82f814a55\"" Jul 15 05:20:31.792740 kubelet[2372]: E0715 05:20:31.792643 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:31.794170 containerd[1571]: time="2025-07-15T05:20:31.794141533Z" level=info msg="CreateContainer 
within sandbox \"e7e10d15ae740c473e6cd3d69d3f2954d8901dd93e5ff081afbc59b82f814a55\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:20:31.798519 containerd[1571]: time="2025-07-15T05:20:31.798478291Z" level=info msg="CreateContainer within sandbox \"ca0f91ac758b6af7ae7f02fa3a2eecb4fad8556f71f26268465400e0778096df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49feb0edf7c93d99563bd02ac9d429faf7dec55f90ad73c27dabadcc38a73c2b\"" Jul 15 05:20:31.799179 containerd[1571]: time="2025-07-15T05:20:31.799158669Z" level=info msg="StartContainer for \"49feb0edf7c93d99563bd02ac9d429faf7dec55f90ad73c27dabadcc38a73c2b\"" Jul 15 05:20:31.801284 containerd[1571]: time="2025-07-15T05:20:31.801233756Z" level=info msg="connecting to shim 49feb0edf7c93d99563bd02ac9d429faf7dec55f90ad73c27dabadcc38a73c2b" address="unix:///run/containerd/s/96bb6338ab6c2a08b54496b5fbc64f38397866c82f83ec0dc702cee366afa0fc" protocol=ttrpc version=3 Jul 15 05:20:31.801628 containerd[1571]: time="2025-07-15T05:20:31.801490558Z" level=info msg="Container 6f82dbf1b6173906b9444c87707220af500dd25b344581178e657f9855629aef: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:20:31.816348 containerd[1571]: time="2025-07-15T05:20:31.816311988Z" level=info msg="CreateContainer within sandbox \"e7e10d15ae740c473e6cd3d69d3f2954d8901dd93e5ff081afbc59b82f814a55\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f82dbf1b6173906b9444c87707220af500dd25b344581178e657f9855629aef\"" Jul 15 05:20:31.818104 systemd[1]: Started cri-containerd-a66467618dea9cba0e2e10ca88e5adf0de197961fa42d61ed3c8ae4ea9d6fc43.scope - libcontainer container a66467618dea9cba0e2e10ca88e5adf0de197961fa42d61ed3c8ae4ea9d6fc43. 
Jul 15 05:20:31.819406 containerd[1571]: time="2025-07-15T05:20:31.819377155Z" level=info msg="StartContainer for \"6f82dbf1b6173906b9444c87707220af500dd25b344581178e657f9855629aef\"" Jul 15 05:20:31.825982 containerd[1571]: time="2025-07-15T05:20:31.825181468Z" level=info msg="connecting to shim 6f82dbf1b6173906b9444c87707220af500dd25b344581178e657f9855629aef" address="unix:///run/containerd/s/b5d1870bb3696d1d3480d961839e8d7600a421abd9a60bc0f9084cae8791e2a9" protocol=ttrpc version=3 Jul 15 05:20:31.829632 systemd[1]: Started cri-containerd-49feb0edf7c93d99563bd02ac9d429faf7dec55f90ad73c27dabadcc38a73c2b.scope - libcontainer container 49feb0edf7c93d99563bd02ac9d429faf7dec55f90ad73c27dabadcc38a73c2b. Jul 15 05:20:31.862757 systemd[1]: Started cri-containerd-6f82dbf1b6173906b9444c87707220af500dd25b344581178e657f9855629aef.scope - libcontainer container 6f82dbf1b6173906b9444c87707220af500dd25b344581178e657f9855629aef. Jul 15 05:20:31.907629 containerd[1571]: time="2025-07-15T05:20:31.907451828Z" level=info msg="StartContainer for \"a66467618dea9cba0e2e10ca88e5adf0de197961fa42d61ed3c8ae4ea9d6fc43\" returns successfully" Jul 15 05:20:31.934148 kubelet[2372]: I0715 05:20:31.934086 2372 kubelet_node_status.go:72] "Attempting to register node" node="172-237-133-19" Jul 15 05:20:31.934393 kubelet[2372]: E0715 05:20:31.934364 2372 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.237.133.19:6443/api/v1/nodes\": dial tcp 172.237.133.19:6443: connect: connection refused" node="172-237-133-19" Jul 15 05:20:31.947815 containerd[1571]: time="2025-07-15T05:20:31.947465654Z" level=info msg="StartContainer for \"49feb0edf7c93d99563bd02ac9d429faf7dec55f90ad73c27dabadcc38a73c2b\" returns successfully" Jul 15 05:20:31.957303 containerd[1571]: time="2025-07-15T05:20:31.957282369Z" level=info msg="StartContainer for \"6f82dbf1b6173906b9444c87707220af500dd25b344581178e657f9855629aef\" returns successfully" Jul 15 05:20:32.184815 kubelet[2372]: 
E0715 05:20:32.184691 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:32.185194 kubelet[2372]: E0715 05:20:32.185168 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:32.189515 kubelet[2372]: E0715 05:20:32.189136 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:32.737114 kubelet[2372]: I0715 05:20:32.737076 2372 kubelet_node_status.go:72] "Attempting to register node" node="172-237-133-19" Jul 15 05:20:33.129462 kubelet[2372]: E0715 05:20:33.129409 2372 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-133-19\" not found" node="172-237-133-19" Jul 15 05:20:33.136643 kubelet[2372]: I0715 05:20:33.136610 2372 apiserver.go:52] "Watching apiserver" Jul 15 05:20:33.150491 kubelet[2372]: I0715 05:20:33.150454 2372 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 05:20:33.190001 kubelet[2372]: E0715 05:20:33.189972 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:33.297516 kubelet[2372]: I0715 05:20:33.297425 2372 kubelet_node_status.go:75] "Successfully registered node" node="172-237-133-19" Jul 15 05:20:35.088983 systemd[1]: Reload requested from client PID 2643 ('systemctl') (unit session-7.scope)... Jul 15 05:20:35.089352 systemd[1]: Reloading... Jul 15 05:20:35.189562 zram_generator::config[2696]: No configuration found. 
Jul 15 05:20:35.270005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:20:35.384231 systemd[1]: Reloading finished in 294 ms. Jul 15 05:20:35.409689 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:35.438122 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:20:35.438403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:35.438447 systemd[1]: kubelet.service: Consumed 886ms CPU time, 130.5M memory peak. Jul 15 05:20:35.441075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:35.642545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:35.648910 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:20:35.696237 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:20:35.696237 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 05:20:35.696237 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 05:20:35.696564 kubelet[2738]: I0715 05:20:35.696357 2738 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:20:35.703550 kubelet[2738]: I0715 05:20:35.702716 2738 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 05:20:35.703550 kubelet[2738]: I0715 05:20:35.702744 2738 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:20:35.703550 kubelet[2738]: I0715 05:20:35.702972 2738 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 05:20:35.704139 kubelet[2738]: I0715 05:20:35.704103 2738 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 05:20:35.707486 kubelet[2738]: I0715 05:20:35.707466 2738 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:20:35.713548 kubelet[2738]: I0715 05:20:35.712341 2738 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:20:35.715772 kubelet[2738]: I0715 05:20:35.715758 2738 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:20:35.715922 kubelet[2738]: I0715 05:20:35.715912 2738 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 05:20:35.716129 kubelet[2738]: I0715 05:20:35.716108 2738 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:20:35.716352 kubelet[2738]: I0715 05:20:35.716177 2738 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-133-19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPo
licyOptions":null,"CgroupVersion":2} Jul 15 05:20:35.716469 kubelet[2738]: I0715 05:20:35.716457 2738 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:20:35.716536 kubelet[2738]: I0715 05:20:35.716527 2738 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 05:20:35.716602 kubelet[2738]: I0715 05:20:35.716593 2738 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:20:35.716741 kubelet[2738]: I0715 05:20:35.716730 2738 kubelet.go:408] "Attempting to sync node with API server" Jul 15 05:20:35.716796 kubelet[2738]: I0715 05:20:35.716787 2738 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:20:35.716893 kubelet[2738]: I0715 05:20:35.716883 2738 kubelet.go:314] "Adding apiserver pod source" Jul 15 05:20:35.716966 kubelet[2738]: I0715 05:20:35.716936 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:20:35.720621 kubelet[2738]: I0715 05:20:35.720607 2738 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:20:35.721022 kubelet[2738]: I0715 05:20:35.721011 2738 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:20:35.721645 kubelet[2738]: I0715 05:20:35.721622 2738 server.go:1274] "Started kubelet" Jul 15 05:20:35.724802 kubelet[2738]: I0715 05:20:35.724647 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:20:35.731042 kubelet[2738]: I0715 05:20:35.731009 2738 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:20:35.731688 kubelet[2738]: I0715 05:20:35.731664 2738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:20:35.731990 kubelet[2738]: I0715 05:20:35.731976 2738 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 
05:20:35.732432 kubelet[2738]: I0715 05:20:35.732418 2738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:20:35.735476 kubelet[2738]: I0715 05:20:35.735462 2738 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 05:20:35.735728 kubelet[2738]: E0715 05:20:35.735711 2738 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-237-133-19\" not found" Jul 15 05:20:35.738426 kubelet[2738]: I0715 05:20:35.738390 2738 server.go:449] "Adding debug handlers to kubelet server" Jul 15 05:20:35.740774 kubelet[2738]: I0715 05:20:35.740759 2738 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 05:20:35.740986 kubelet[2738]: I0715 05:20:35.740972 2738 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:20:35.741189 kubelet[2738]: I0715 05:20:35.741159 2738 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:20:35.741288 kubelet[2738]: I0715 05:20:35.741256 2738 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:20:35.745908 kubelet[2738]: I0715 05:20:35.745859 2738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:20:35.748153 kubelet[2738]: E0715 05:20:35.748135 2738 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:20:35.748546 kubelet[2738]: I0715 05:20:35.748517 2738 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 05:20:35.748546 kubelet[2738]: I0715 05:20:35.748543 2738 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 05:20:35.748663 kubelet[2738]: I0715 05:20:35.748562 2738 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 05:20:35.748663 kubelet[2738]: E0715 05:20:35.748603 2738 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:20:35.750249 kubelet[2738]: I0715 05:20:35.750232 2738 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:20:35.803995 kubelet[2738]: I0715 05:20:35.803980 2738 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 05:20:35.804100 kubelet[2738]: I0715 05:20:35.804089 2738 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 05:20:35.804148 kubelet[2738]: I0715 05:20:35.804141 2738 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:20:35.804317 kubelet[2738]: I0715 05:20:35.804304 2738 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 05:20:35.804382 kubelet[2738]: I0715 05:20:35.804364 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 05:20:35.804425 kubelet[2738]: I0715 05:20:35.804418 2738 policy_none.go:49] "None policy: Start" Jul 15 05:20:35.805122 kubelet[2738]: I0715 05:20:35.805101 2738 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 05:20:35.805160 kubelet[2738]: I0715 05:20:35.805128 2738 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:20:35.805310 kubelet[2738]: I0715 05:20:35.805280 2738 state_mem.go:75] "Updated machine memory state" Jul 15 05:20:35.810695 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 15 05:20:35.815432 kubelet[2738]: I0715 05:20:35.815402 2738 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:20:35.815868 kubelet[2738]: I0715 05:20:35.815618 2738 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:20:35.815868 kubelet[2738]: I0715 05:20:35.815635 2738 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:20:35.816306 kubelet[2738]: I0715 05:20:35.816285 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:20:35.918798 kubelet[2738]: I0715 05:20:35.918693 2738 kubelet_node_status.go:72] "Attempting to register node" node="172-237-133-19" Jul 15 05:20:35.925758 kubelet[2738]: I0715 05:20:35.925710 2738 kubelet_node_status.go:111] "Node was previously registered" node="172-237-133-19" Jul 15 05:20:35.925826 kubelet[2738]: I0715 05:20:35.925773 2738 kubelet_node_status.go:75] "Successfully registered node" node="172-237-133-19" Jul 15 05:20:36.041866 kubelet[2738]: I0715 05:20:36.041834 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/951d8873ca793ce5202651ffc5533416-kubeconfig\") pod \"kube-scheduler-172-237-133-19\" (UID: \"951d8873ca793ce5202651ffc5533416\") " pod="kube-system/kube-scheduler-172-237-133-19" Jul 15 05:20:36.041866 kubelet[2738]: I0715 05:20:36.041866 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fceec3381fed0035e1bb17f6a2025e9a-k8s-certs\") pod \"kube-apiserver-172-237-133-19\" (UID: \"fceec3381fed0035e1bb17f6a2025e9a\") " pod="kube-system/kube-apiserver-172-237-133-19" Jul 15 05:20:36.042027 kubelet[2738]: I0715 05:20:36.041885 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fceec3381fed0035e1bb17f6a2025e9a-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-133-19\" (UID: \"fceec3381fed0035e1bb17f6a2025e9a\") " pod="kube-system/kube-apiserver-172-237-133-19" Jul 15 05:20:36.042027 kubelet[2738]: I0715 05:20:36.041902 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fceec3381fed0035e1bb17f6a2025e9a-ca-certs\") pod \"kube-apiserver-172-237-133-19\" (UID: \"fceec3381fed0035e1bb17f6a2025e9a\") " pod="kube-system/kube-apiserver-172-237-133-19" Jul 15 05:20:36.042027 kubelet[2738]: I0715 05:20:36.041916 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-ca-certs\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:36.042027 kubelet[2738]: I0715 05:20:36.041929 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-flexvolume-dir\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:36.042027 kubelet[2738]: I0715 05:20:36.041962 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-k8s-certs\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:36.042147 kubelet[2738]: I0715 05:20:36.041975 2738 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-kubeconfig\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:36.042147 kubelet[2738]: I0715 05:20:36.041992 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad6d5674b800fccf7124f93fd4b50c74-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-133-19\" (UID: \"ad6d5674b800fccf7124f93fd4b50c74\") " pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:36.157981 kubelet[2738]: E0715 05:20:36.157791 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:36.157981 kubelet[2738]: E0715 05:20:36.157913 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:36.158981 kubelet[2738]: E0715 05:20:36.158905 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:36.719849 kubelet[2738]: I0715 05:20:36.719615 2738 apiserver.go:52] "Watching apiserver" Jul 15 05:20:36.741606 kubelet[2738]: I0715 05:20:36.741568 2738 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 05:20:36.756735 kubelet[2738]: I0715 05:20:36.756649 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-133-19" podStartSLOduration=1.7566364509999999 
podStartE2EDuration="1.756636451s" podCreationTimestamp="2025-07-15 05:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:20:36.749081673 +0000 UTC m=+1.094062269" watchObservedRunningTime="2025-07-15 05:20:36.756636451 +0000 UTC m=+1.101617047" Jul 15 05:20:36.767393 kubelet[2738]: I0715 05:20:36.767210 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-133-19" podStartSLOduration=1.767198988 podStartE2EDuration="1.767198988s" podCreationTimestamp="2025-07-15 05:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:20:36.76592082 +0000 UTC m=+1.110901426" watchObservedRunningTime="2025-07-15 05:20:36.767198988 +0000 UTC m=+1.112179584" Jul 15 05:20:36.767393 kubelet[2738]: I0715 05:20:36.767273 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-133-19" podStartSLOduration=1.767269809 podStartE2EDuration="1.767269809s" podCreationTimestamp="2025-07-15 05:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:20:36.75704788 +0000 UTC m=+1.102028486" watchObservedRunningTime="2025-07-15 05:20:36.767269809 +0000 UTC m=+1.112250405" Jul 15 05:20:36.784598 kubelet[2738]: E0715 05:20:36.784550 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:36.795020 kubelet[2738]: E0715 05:20:36.794988 2738 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-237-133-19\" already exists" pod="kube-system/kube-apiserver-172-237-133-19" Jul 15 05:20:36.795188 
kubelet[2738]: E0715 05:20:36.795106 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:36.795430 kubelet[2738]: E0715 05:20:36.795336 2738 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-172-237-133-19\" already exists" pod="kube-system/kube-controller-manager-172-237-133-19" Jul 15 05:20:36.795460 kubelet[2738]: E0715 05:20:36.795431 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:37.785703 kubelet[2738]: E0715 05:20:37.785635 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:37.786118 kubelet[2738]: E0715 05:20:37.785888 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:37.786118 kubelet[2738]: E0715 05:20:37.786078 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:40.132322 kubelet[2738]: I0715 05:20:40.132297 2738 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 05:20:40.133114 kubelet[2738]: I0715 05:20:40.132753 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 05:20:40.133172 containerd[1571]: time="2025-07-15T05:20:40.132626974Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 15 05:20:41.044468 systemd[1]: Created slice kubepods-besteffort-pod46114caf_7341_4546_b81d_12db54f66597.slice - libcontainer container kubepods-besteffort-pod46114caf_7341_4546_b81d_12db54f66597.slice.
Jul 15 05:20:41.075829 kubelet[2738]: I0715 05:20:41.075797 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46114caf-7341-4546-b81d-12db54f66597-kube-proxy\") pod \"kube-proxy-g47dr\" (UID: \"46114caf-7341-4546-b81d-12db54f66597\") " pod="kube-system/kube-proxy-g47dr"
Jul 15 05:20:41.077694 kubelet[2738]: I0715 05:20:41.077563 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crwbb\" (UniqueName: \"kubernetes.io/projected/46114caf-7341-4546-b81d-12db54f66597-kube-api-access-crwbb\") pod \"kube-proxy-g47dr\" (UID: \"46114caf-7341-4546-b81d-12db54f66597\") " pod="kube-system/kube-proxy-g47dr"
Jul 15 05:20:41.077694 kubelet[2738]: I0715 05:20:41.077591 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46114caf-7341-4546-b81d-12db54f66597-xtables-lock\") pod \"kube-proxy-g47dr\" (UID: \"46114caf-7341-4546-b81d-12db54f66597\") " pod="kube-system/kube-proxy-g47dr"
Jul 15 05:20:41.077694 kubelet[2738]: I0715 05:20:41.077645 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46114caf-7341-4546-b81d-12db54f66597-lib-modules\") pod \"kube-proxy-g47dr\" (UID: \"46114caf-7341-4546-b81d-12db54f66597\") " pod="kube-system/kube-proxy-g47dr"
Jul 15 05:20:41.215280 kubelet[2738]: W0715 05:20:41.215242 2738 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-237-133-19" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-237-133-19' and this object
Jul 15 05:20:41.215662 kubelet[2738]: E0715 05:20:41.215283 2738 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-237-133-19\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-237-133-19' and this object" logger="UnhandledError"
Jul 15 05:20:41.217387 kubelet[2738]: W0715 05:20:41.217137 2738 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:172-237-133-19" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-237-133-19' and this object
Jul 15 05:20:41.217387 kubelet[2738]: E0715 05:20:41.217160 2738 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:172-237-133-19\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-237-133-19' and this object" logger="UnhandledError"
Jul 15 05:20:41.219987 systemd[1]: Created slice kubepods-besteffort-pod962972e5_244d_4aad_86de_fa8f1ae2597a.slice - libcontainer container kubepods-besteffort-pod962972e5_244d_4aad_86de_fa8f1ae2597a.slice.
Jul 15 05:20:41.279163 kubelet[2738]: I0715 05:20:41.279127 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t544f\" (UniqueName: \"kubernetes.io/projected/962972e5-244d-4aad-86de-fa8f1ae2597a-kube-api-access-t544f\") pod \"tigera-operator-5bf8dfcb4-rfdqz\" (UID: \"962972e5-244d-4aad-86de-fa8f1ae2597a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-rfdqz"
Jul 15 05:20:41.279163 kubelet[2738]: I0715 05:20:41.279167 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/962972e5-244d-4aad-86de-fa8f1ae2597a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-rfdqz\" (UID: \"962972e5-244d-4aad-86de-fa8f1ae2597a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-rfdqz"
Jul 15 05:20:41.352648 kubelet[2738]: E0715 05:20:41.352298 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:41.352901 containerd[1571]: time="2025-07-15T05:20:41.352862095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g47dr,Uid:46114caf-7341-4546-b81d-12db54f66597,Namespace:kube-system,Attempt:0,}"
Jul 15 05:20:41.369911 containerd[1571]: time="2025-07-15T05:20:41.369881657Z" level=info msg="connecting to shim 505c9da39fde267b86e5d91d9499cde83d5e39bd98924f48ec24c0b3d3168f31" address="unix:///run/containerd/s/51ed83b1975d0922c6740b6c221149f033470b0894246a3a6c3bd441efc97627" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:20:41.402621 systemd[1]: Started cri-containerd-505c9da39fde267b86e5d91d9499cde83d5e39bd98924f48ec24c0b3d3168f31.scope - libcontainer container 505c9da39fde267b86e5d91d9499cde83d5e39bd98924f48ec24c0b3d3168f31.
Jul 15 05:20:41.427290 containerd[1571]: time="2025-07-15T05:20:41.427243876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g47dr,Uid:46114caf-7341-4546-b81d-12db54f66597,Namespace:kube-system,Attempt:0,} returns sandbox id \"505c9da39fde267b86e5d91d9499cde83d5e39bd98924f48ec24c0b3d3168f31\""
Jul 15 05:20:41.429382 kubelet[2738]: E0715 05:20:41.429359 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:41.433823 containerd[1571]: time="2025-07-15T05:20:41.433731815Z" level=info msg="CreateContainer within sandbox \"505c9da39fde267b86e5d91d9499cde83d5e39bd98924f48ec24c0b3d3168f31\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 05:20:41.445933 containerd[1571]: time="2025-07-15T05:20:41.445906453Z" level=info msg="Container 65acd8bce7d3cd1aac83a3af4683a5f3d9b00b120c754c49633ffb95d0bd8637: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:20:41.452256 containerd[1571]: time="2025-07-15T05:20:41.452225514Z" level=info msg="CreateContainer within sandbox \"505c9da39fde267b86e5d91d9499cde83d5e39bd98924f48ec24c0b3d3168f31\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65acd8bce7d3cd1aac83a3af4683a5f3d9b00b120c754c49633ffb95d0bd8637\""
Jul 15 05:20:41.453037 containerd[1571]: time="2025-07-15T05:20:41.452750575Z" level=info msg="StartContainer for \"65acd8bce7d3cd1aac83a3af4683a5f3d9b00b120c754c49633ffb95d0bd8637\""
Jul 15 05:20:41.453982 containerd[1571]: time="2025-07-15T05:20:41.453960911Z" level=info msg="connecting to shim 65acd8bce7d3cd1aac83a3af4683a5f3d9b00b120c754c49633ffb95d0bd8637" address="unix:///run/containerd/s/51ed83b1975d0922c6740b6c221149f033470b0894246a3a6c3bd441efc97627" protocol=ttrpc version=3
Jul 15 05:20:41.472618 systemd[1]: Started cri-containerd-65acd8bce7d3cd1aac83a3af4683a5f3d9b00b120c754c49633ffb95d0bd8637.scope - libcontainer container 65acd8bce7d3cd1aac83a3af4683a5f3d9b00b120c754c49633ffb95d0bd8637.
Jul 15 05:20:41.518653 containerd[1571]: time="2025-07-15T05:20:41.518626564Z" level=info msg="StartContainer for \"65acd8bce7d3cd1aac83a3af4683a5f3d9b00b120c754c49633ffb95d0bd8637\" returns successfully"
Jul 15 05:20:41.792220 kubelet[2738]: E0715 05:20:41.792171 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:42.387106 kubelet[2738]: E0715 05:20:42.387064 2738 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jul 15 05:20:42.387106 kubelet[2738]: E0715 05:20:42.387096 2738 projected.go:194] Error preparing data for projected volume kube-api-access-t544f for pod tigera-operator/tigera-operator-5bf8dfcb4-rfdqz: failed to sync configmap cache: timed out waiting for the condition
Jul 15 05:20:42.387563 kubelet[2738]: E0715 05:20:42.387155 2738 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/962972e5-244d-4aad-86de-fa8f1ae2597a-kube-api-access-t544f podName:962972e5-244d-4aad-86de-fa8f1ae2597a nodeName:}" failed. No retries permitted until 2025-07-15 05:20:42.88713752 +0000 UTC m=+7.232118116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t544f" (UniqueName: "kubernetes.io/projected/962972e5-244d-4aad-86de-fa8f1ae2597a-kube-api-access-t544f") pod "tigera-operator-5bf8dfcb4-rfdqz" (UID: "962972e5-244d-4aad-86de-fa8f1ae2597a") : failed to sync configmap cache: timed out waiting for the condition
Jul 15 05:20:43.023743 containerd[1571]: time="2025-07-15T05:20:43.023702411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-rfdqz,Uid:962972e5-244d-4aad-86de-fa8f1ae2597a,Namespace:tigera-operator,Attempt:0,}"
Jul 15 05:20:43.044096 containerd[1571]: time="2025-07-15T05:20:43.044043289Z" level=info msg="connecting to shim c4e4c382bfc8d39025e5b86737786d93f04160bd698aa03d6fe32419b96a39f6" address="unix:///run/containerd/s/d39647f5769c70a169f74ebd733ae71fde8e454b10fecb476875fee64df525ef" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:20:43.071663 systemd[1]: Started cri-containerd-c4e4c382bfc8d39025e5b86737786d93f04160bd698aa03d6fe32419b96a39f6.scope - libcontainer container c4e4c382bfc8d39025e5b86737786d93f04160bd698aa03d6fe32419b96a39f6.
Jul 15 05:20:43.121961 containerd[1571]: time="2025-07-15T05:20:43.121911962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-rfdqz,Uid:962972e5-244d-4aad-86de-fa8f1ae2597a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c4e4c382bfc8d39025e5b86737786d93f04160bd698aa03d6fe32419b96a39f6\""
Jul 15 05:20:43.124874 containerd[1571]: time="2025-07-15T05:20:43.124825132Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 15 05:20:43.996227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396275091.mount: Deactivated successfully.
Jul 15 05:20:44.916541 containerd[1571]: time="2025-07-15T05:20:44.916444177Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:20:44.917376 containerd[1571]: time="2025-07-15T05:20:44.917266953Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 15 05:20:44.917952 containerd[1571]: time="2025-07-15T05:20:44.917915322Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:20:44.919450 containerd[1571]: time="2025-07-15T05:20:44.919399975Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:20:44.920250 containerd[1571]: time="2025-07-15T05:20:44.920072338Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.795203167s"
Jul 15 05:20:44.920250 containerd[1571]: time="2025-07-15T05:20:44.920111018Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 15 05:20:44.923963 containerd[1571]: time="2025-07-15T05:20:44.923925105Z" level=info msg="CreateContainer within sandbox \"c4e4c382bfc8d39025e5b86737786d93f04160bd698aa03d6fe32419b96a39f6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 15 05:20:44.934168 containerd[1571]: time="2025-07-15T05:20:44.933667374Z" level=info msg="Container 9daa7f557445b73fa2a65d10e13d899c298d3b6bf104ae78862af7a339730bab: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:20:44.949654 containerd[1571]: time="2025-07-15T05:20:44.949605729Z" level=info msg="CreateContainer within sandbox \"c4e4c382bfc8d39025e5b86737786d93f04160bd698aa03d6fe32419b96a39f6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9daa7f557445b73fa2a65d10e13d899c298d3b6bf104ae78862af7a339730bab\""
Jul 15 05:20:44.950561 containerd[1571]: time="2025-07-15T05:20:44.950458747Z" level=info msg="StartContainer for \"9daa7f557445b73fa2a65d10e13d899c298d3b6bf104ae78862af7a339730bab\""
Jul 15 05:20:44.951908 containerd[1571]: time="2025-07-15T05:20:44.951876756Z" level=info msg="connecting to shim 9daa7f557445b73fa2a65d10e13d899c298d3b6bf104ae78862af7a339730bab" address="unix:///run/containerd/s/d39647f5769c70a169f74ebd733ae71fde8e454b10fecb476875fee64df525ef" protocol=ttrpc version=3
Jul 15 05:20:44.974627 systemd[1]: Started cri-containerd-9daa7f557445b73fa2a65d10e13d899c298d3b6bf104ae78862af7a339730bab.scope - libcontainer container 9daa7f557445b73fa2a65d10e13d899c298d3b6bf104ae78862af7a339730bab.
Jul 15 05:20:45.002146 containerd[1571]: time="2025-07-15T05:20:45.002098974Z" level=info msg="StartContainer for \"9daa7f557445b73fa2a65d10e13d899c298d3b6bf104ae78862af7a339730bab\" returns successfully"
Jul 15 05:20:45.808534 kubelet[2738]: I0715 05:20:45.808322 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g47dr" podStartSLOduration=4.808304615 podStartE2EDuration="4.808304615s" podCreationTimestamp="2025-07-15 05:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:20:41.801901114 +0000 UTC m=+6.146881710" watchObservedRunningTime="2025-07-15 05:20:45.808304615 +0000 UTC m=+10.153285211"
Jul 15 05:20:45.810244 kubelet[2738]: I0715 05:20:45.810184 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-rfdqz" podStartSLOduration=3.012887971 podStartE2EDuration="4.810173062s" podCreationTimestamp="2025-07-15 05:20:41 +0000 UTC" firstStartedPulling="2025-07-15 05:20:43.123549829 +0000 UTC m=+7.468530425" lastFinishedPulling="2025-07-15 05:20:44.92083492 +0000 UTC m=+9.265815516" observedRunningTime="2025-07-15 05:20:45.810168183 +0000 UTC m=+10.155148789" watchObservedRunningTime="2025-07-15 05:20:45.810173062 +0000 UTC m=+10.155153658"
Jul 15 05:20:46.232659 kubelet[2738]: E0715 05:20:46.232398 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:46.600535 kubelet[2738]: E0715 05:20:46.600270 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:46.743612 kubelet[2738]: E0715 05:20:46.743540 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:46.803868 kubelet[2738]: E0715 05:20:46.803814 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:50.017686 update_engine[1544]: I20250715 05:20:50.017597 1544 update_attempter.cc:509] Updating boot flags...
Jul 15 05:20:50.563351 sudo[1807]: pam_unix(sudo:session): session closed for user root
Jul 15 05:20:50.615245 sshd[1806]: Connection closed by 139.178.68.195 port 56194
Jul 15 05:20:50.618665 sshd-session[1803]: pam_unix(sshd:session): session closed for user core
Jul 15 05:20:50.624054 systemd-logind[1541]: Session 7 logged out. Waiting for processes to exit.
Jul 15 05:20:50.627276 systemd[1]: sshd@6-172.237.133.19:22-139.178.68.195:56194.service: Deactivated successfully.
Jul 15 05:20:50.630561 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 05:20:50.631076 systemd[1]: session-7.scope: Consumed 3.651s CPU time, 223M memory peak.
Jul 15 05:20:50.633890 systemd-logind[1541]: Removed session 7.
Jul 15 05:20:53.417867 systemd[1]: Created slice kubepods-besteffort-podd17f3d8a_7ffa_45de_8edd_57f7eaceb532.slice - libcontainer container kubepods-besteffort-podd17f3d8a_7ffa_45de_8edd_57f7eaceb532.slice.
Jul 15 05:20:53.449625 kubelet[2738]: I0715 05:20:53.449552 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d17f3d8a-7ffa-45de-8edd-57f7eaceb532-typha-certs\") pod \"calico-typha-5bbc65df77-hmmh9\" (UID: \"d17f3d8a-7ffa-45de-8edd-57f7eaceb532\") " pod="calico-system/calico-typha-5bbc65df77-hmmh9"
Jul 15 05:20:53.449625 kubelet[2738]: I0715 05:20:53.449604 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6jxb\" (UniqueName: \"kubernetes.io/projected/d17f3d8a-7ffa-45de-8edd-57f7eaceb532-kube-api-access-q6jxb\") pod \"calico-typha-5bbc65df77-hmmh9\" (UID: \"d17f3d8a-7ffa-45de-8edd-57f7eaceb532\") " pod="calico-system/calico-typha-5bbc65df77-hmmh9"
Jul 15 05:20:53.449625 kubelet[2738]: I0715 05:20:53.449627 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d17f3d8a-7ffa-45de-8edd-57f7eaceb532-tigera-ca-bundle\") pod \"calico-typha-5bbc65df77-hmmh9\" (UID: \"d17f3d8a-7ffa-45de-8edd-57f7eaceb532\") " pod="calico-system/calico-typha-5bbc65df77-hmmh9"
Jul 15 05:20:53.724969 kubelet[2738]: E0715 05:20:53.724285 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:53.726705 containerd[1571]: time="2025-07-15T05:20:53.726618169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bbc65df77-hmmh9,Uid:d17f3d8a-7ffa-45de-8edd-57f7eaceb532,Namespace:calico-system,Attempt:0,}"
Jul 15 05:20:53.763601 systemd[1]: Created slice kubepods-besteffort-pod7bb49edd_b3ed_43ab_be0c_0d9cf8565314.slice - libcontainer container kubepods-besteffort-pod7bb49edd_b3ed_43ab_be0c_0d9cf8565314.slice.
Jul 15 05:20:53.769001 containerd[1571]: time="2025-07-15T05:20:53.768951995Z" level=info msg="connecting to shim 4cdc716abe824a4db38dda98ca627fde9dedccf8411236a37187161de0fc9322" address="unix:///run/containerd/s/e07e4647dfceaefa220af1ba408f481f5350226428caeb4e4fc3eee69a56146c" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:20:53.810724 systemd[1]: Started cri-containerd-4cdc716abe824a4db38dda98ca627fde9dedccf8411236a37187161de0fc9322.scope - libcontainer container 4cdc716abe824a4db38dda98ca627fde9dedccf8411236a37187161de0fc9322.
Jul 15 05:20:53.852677 kubelet[2738]: I0715 05:20:53.851629 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-cni-net-dir\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852677 kubelet[2738]: I0715 05:20:53.851663 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-policysync\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852677 kubelet[2738]: I0715 05:20:53.851688 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lr65\" (UniqueName: \"kubernetes.io/projected/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-kube-api-access-4lr65\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852677 kubelet[2738]: I0715 05:20:53.851708 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-var-run-calico\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852677 kubelet[2738]: I0715 05:20:53.851725 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-flexvol-driver-host\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852869 kubelet[2738]: I0715 05:20:53.851740 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-cni-bin-dir\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852869 kubelet[2738]: I0715 05:20:53.851756 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-tigera-ca-bundle\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852869 kubelet[2738]: I0715 05:20:53.851775 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-node-certs\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852869 kubelet[2738]: I0715 05:20:53.851792 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-cni-log-dir\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852869 kubelet[2738]: I0715 05:20:53.851807 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-lib-modules\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852970 kubelet[2738]: I0715 05:20:53.851824 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-var-lib-calico\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.852970 kubelet[2738]: I0715 05:20:53.851841 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bb49edd-b3ed-43ab-be0c-0d9cf8565314-xtables-lock\") pod \"calico-node-jl9c8\" (UID: \"7bb49edd-b3ed-43ab-be0c-0d9cf8565314\") " pod="calico-system/calico-node-jl9c8"
Jul 15 05:20:53.895882 containerd[1571]: time="2025-07-15T05:20:53.895823034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bbc65df77-hmmh9,Uid:d17f3d8a-7ffa-45de-8edd-57f7eaceb532,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cdc716abe824a4db38dda98ca627fde9dedccf8411236a37187161de0fc9322\""
Jul 15 05:20:53.897065 kubelet[2738]: E0715 05:20:53.897040 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:53.898480 containerd[1571]: time="2025-07-15T05:20:53.898450281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 15 05:20:53.969578 kubelet[2738]: E0715 05:20:53.969400 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:53.969578 kubelet[2738]: W0715 05:20:53.969422 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:53.969578 kubelet[2738]: E0715 05:20:53.969455 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:53.977554 kubelet[2738]: E0715 05:20:53.975209 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:53.977554 kubelet[2738]: W0715 05:20:53.975237 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:53.977554 kubelet[2738]: E0715 05:20:53.975262 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.038199 kubelet[2738]: E0715 05:20:54.038148 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjjqr" podUID="2906217e-d60f-46de-8e0a-40e519ff8ae1"
Jul 15 05:20:54.054635 kubelet[2738]: E0715 05:20:54.054602 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.054635 kubelet[2738]: W0715 05:20:54.054627 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.054725 kubelet[2738]: E0715 05:20:54.054650 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.055016 kubelet[2738]: E0715 05:20:54.054988 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.055016 kubelet[2738]: W0715 05:20:54.055003 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.055016 kubelet[2738]: E0715 05:20:54.055013 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.055658 kubelet[2738]: E0715 05:20:54.055629 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.055658 kubelet[2738]: W0715 05:20:54.055649 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.055658 kubelet[2738]: E0715 05:20:54.055659 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.055924 kubelet[2738]: E0715 05:20:54.055903 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.055924 kubelet[2738]: W0715 05:20:54.055918 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.055924 kubelet[2738]: E0715 05:20:54.055927 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.056657 kubelet[2738]: E0715 05:20:54.056639 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.056657 kubelet[2738]: W0715 05:20:54.056653 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.056718 kubelet[2738]: E0715 05:20:54.056662 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.056973 kubelet[2738]: E0715 05:20:54.056952 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.056973 kubelet[2738]: W0715 05:20:54.056971 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.057073 kubelet[2738]: E0715 05:20:54.056982 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.058256 kubelet[2738]: E0715 05:20:54.058238 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.058256 kubelet[2738]: W0715 05:20:54.058252 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.058256 kubelet[2738]: E0715 05:20:54.058263 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.058598 kubelet[2738]: E0715 05:20:54.058580 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.058598 kubelet[2738]: W0715 05:20:54.058594 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.058696 kubelet[2738]: E0715 05:20:54.058603 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.059739 kubelet[2738]: E0715 05:20:54.059720 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.059739 kubelet[2738]: W0715 05:20:54.059736 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.059814 kubelet[2738]: E0715 05:20:54.059747 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.059968 kubelet[2738]: E0715 05:20:54.059941 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.059968 kubelet[2738]: W0715 05:20:54.059962 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.060041 kubelet[2738]: E0715 05:20:54.059971 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.060230 kubelet[2738]: E0715 05:20:54.060211 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.060230 kubelet[2738]: W0715 05:20:54.060226 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.060294 kubelet[2738]: E0715 05:20:54.060236 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.060462 kubelet[2738]: E0715 05:20:54.060444 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.060462 kubelet[2738]: W0715 05:20:54.060459 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.060462 kubelet[2738]: E0715 05:20:54.060467 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.061849 kubelet[2738]: E0715 05:20:54.061822 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.061849 kubelet[2738]: W0715 05:20:54.061842 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.061849 kubelet[2738]: E0715 05:20:54.061851 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.062534 kubelet[2738]: E0715 05:20:54.062489 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.062534 kubelet[2738]: W0715 05:20:54.062529 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.062785 kubelet[2738]: E0715 05:20:54.062539 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.062785 kubelet[2738]: E0715 05:20:54.062734 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.062785 kubelet[2738]: W0715 05:20:54.062741 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.062785 kubelet[2738]: E0715 05:20:54.062749 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.063135 kubelet[2738]: E0715 05:20:54.062925 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.063135 kubelet[2738]: W0715 05:20:54.062932 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.063135 kubelet[2738]: E0715 05:20:54.062940 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 15 05:20:54.063135 kubelet[2738]: E0715 05:20:54.063096 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.063135 kubelet[2738]: W0715 05:20:54.063103 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.063135 kubelet[2738]: E0715 05:20:54.063115 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.063389 kubelet[2738]: E0715 05:20:54.063258 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.063389 kubelet[2738]: W0715 05:20:54.063265 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.063389 kubelet[2738]: E0715 05:20:54.063273 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.063819 kubelet[2738]: E0715 05:20:54.063418 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.063819 kubelet[2738]: W0715 05:20:54.063426 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.063819 kubelet[2738]: E0715 05:20:54.063433 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.063819 kubelet[2738]: E0715 05:20:54.063643 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.063819 kubelet[2738]: W0715 05:20:54.063650 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.063819 kubelet[2738]: E0715 05:20:54.063658 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.069517 containerd[1571]: time="2025-07-15T05:20:54.069454810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jl9c8,Uid:7bb49edd-b3ed-43ab-be0c-0d9cf8565314,Namespace:calico-system,Attempt:0,}" Jul 15 05:20:54.090536 containerd[1571]: time="2025-07-15T05:20:54.090461799Z" level=info msg="connecting to shim e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07" address="unix:///run/containerd/s/e60642e2e387601a0e58320f6dc6a296f44f3534cda5a6e3d7aebfb0012a8ac8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:54.120656 systemd[1]: Started cri-containerd-e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07.scope - libcontainer container e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07. Jul 15 05:20:54.149460 containerd[1571]: time="2025-07-15T05:20:54.149410389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jl9c8,Uid:7bb49edd-b3ed-43ab-be0c-0d9cf8565314,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07\"" Jul 15 05:20:54.153859 kubelet[2738]: E0715 05:20:54.153821 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.154034 kubelet[2738]: W0715 05:20:54.153958 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.154034 kubelet[2738]: E0715 05:20:54.153986 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.154142 kubelet[2738]: I0715 05:20:54.154127 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2906217e-d60f-46de-8e0a-40e519ff8ae1-kubelet-dir\") pod \"csi-node-driver-zjjqr\" (UID: \"2906217e-d60f-46de-8e0a-40e519ff8ae1\") " pod="calico-system/csi-node-driver-zjjqr" Jul 15 05:20:54.154690 kubelet[2738]: E0715 05:20:54.154598 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.154690 kubelet[2738]: W0715 05:20:54.154625 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.154690 kubelet[2738]: E0715 05:20:54.154643 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.154817 kubelet[2738]: E0715 05:20:54.154794 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.154817 kubelet[2738]: W0715 05:20:54.154809 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.154974 kubelet[2738]: E0715 05:20:54.154818 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.154974 kubelet[2738]: E0715 05:20:54.154969 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.154974 kubelet[2738]: W0715 05:20:54.154976 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.155047 kubelet[2738]: E0715 05:20:54.154985 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.155047 kubelet[2738]: I0715 05:20:54.155012 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2906217e-d60f-46de-8e0a-40e519ff8ae1-socket-dir\") pod \"csi-node-driver-zjjqr\" (UID: \"2906217e-d60f-46de-8e0a-40e519ff8ae1\") " pod="calico-system/csi-node-driver-zjjqr" Jul 15 05:20:54.155190 kubelet[2738]: E0715 05:20:54.155163 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.155190 kubelet[2738]: W0715 05:20:54.155182 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.155190 kubelet[2738]: E0715 05:20:54.155190 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.155294 kubelet[2738]: I0715 05:20:54.155205 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2906217e-d60f-46de-8e0a-40e519ff8ae1-registration-dir\") pod \"csi-node-driver-zjjqr\" (UID: \"2906217e-d60f-46de-8e0a-40e519ff8ae1\") " pod="calico-system/csi-node-driver-zjjqr" Jul 15 05:20:54.155843 kubelet[2738]: E0715 05:20:54.155816 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.155876 kubelet[2738]: W0715 05:20:54.155849 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.155876 kubelet[2738]: E0715 05:20:54.155860 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.155876 kubelet[2738]: I0715 05:20:54.155874 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2906217e-d60f-46de-8e0a-40e519ff8ae1-varrun\") pod \"csi-node-driver-zjjqr\" (UID: \"2906217e-d60f-46de-8e0a-40e519ff8ae1\") " pod="calico-system/csi-node-driver-zjjqr" Jul 15 05:20:54.156045 kubelet[2738]: E0715 05:20:54.156028 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.156045 kubelet[2738]: W0715 05:20:54.156041 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.156092 kubelet[2738]: E0715 05:20:54.156049 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.156092 kubelet[2738]: I0715 05:20:54.156062 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9bs\" (UniqueName: \"kubernetes.io/projected/2906217e-d60f-46de-8e0a-40e519ff8ae1-kube-api-access-fg9bs\") pod \"csi-node-driver-zjjqr\" (UID: \"2906217e-d60f-46de-8e0a-40e519ff8ae1\") " pod="calico-system/csi-node-driver-zjjqr" Jul 15 05:20:54.156262 kubelet[2738]: E0715 05:20:54.156243 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.156262 kubelet[2738]: W0715 05:20:54.156257 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.156335 kubelet[2738]: E0715 05:20:54.156274 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.156485 kubelet[2738]: E0715 05:20:54.156468 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.156485 kubelet[2738]: W0715 05:20:54.156481 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.156678 kubelet[2738]: E0715 05:20:54.156616 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.156714 kubelet[2738]: E0715 05:20:54.156709 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.156735 kubelet[2738]: W0715 05:20:54.156716 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.156759 kubelet[2738]: E0715 05:20:54.156743 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.156970 kubelet[2738]: E0715 05:20:54.156953 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.156970 kubelet[2738]: W0715 05:20:54.156966 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.157028 kubelet[2738]: E0715 05:20:54.156987 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.157208 kubelet[2738]: E0715 05:20:54.157191 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.157208 kubelet[2738]: W0715 05:20:54.157204 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.157269 kubelet[2738]: E0715 05:20:54.157224 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.157452 kubelet[2738]: E0715 05:20:54.157436 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.157452 kubelet[2738]: W0715 05:20:54.157449 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.157567 kubelet[2738]: E0715 05:20:54.157550 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.157728 kubelet[2738]: E0715 05:20:54.157711 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.157728 kubelet[2738]: W0715 05:20:54.157724 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.157790 kubelet[2738]: E0715 05:20:54.157732 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.157907 kubelet[2738]: E0715 05:20:54.157892 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.157907 kubelet[2738]: W0715 05:20:54.157904 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.157957 kubelet[2738]: E0715 05:20:54.157914 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.259135 kubelet[2738]: E0715 05:20:54.257549 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.259135 kubelet[2738]: W0715 05:20:54.257578 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.259135 kubelet[2738]: E0715 05:20:54.257601 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.259135 kubelet[2738]: E0715 05:20:54.257902 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.259135 kubelet[2738]: W0715 05:20:54.257911 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.259135 kubelet[2738]: E0715 05:20:54.257934 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.259135 kubelet[2738]: E0715 05:20:54.258129 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.259135 kubelet[2738]: W0715 05:20:54.258137 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.259135 kubelet[2738]: E0715 05:20:54.258157 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.259135 kubelet[2738]: E0715 05:20:54.258363 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.262283 kubelet[2738]: W0715 05:20:54.258371 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.262283 kubelet[2738]: E0715 05:20:54.258391 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.262283 kubelet[2738]: E0715 05:20:54.258615 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.262283 kubelet[2738]: W0715 05:20:54.258622 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.262283 kubelet[2738]: E0715 05:20:54.258642 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.262283 kubelet[2738]: E0715 05:20:54.258835 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.262283 kubelet[2738]: W0715 05:20:54.258842 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.262283 kubelet[2738]: E0715 05:20:54.258922 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.262283 kubelet[2738]: E0715 05:20:54.259035 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.262283 kubelet[2738]: W0715 05:20:54.259041 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.262661 kubelet[2738]: E0715 05:20:54.259118 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.262661 kubelet[2738]: E0715 05:20:54.259228 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.262661 kubelet[2738]: W0715 05:20:54.259234 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.262661 kubelet[2738]: E0715 05:20:54.259310 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.262661 kubelet[2738]: E0715 05:20:54.259426 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.262661 kubelet[2738]: W0715 05:20:54.259432 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.262661 kubelet[2738]: E0715 05:20:54.259451 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.262661 kubelet[2738]: E0715 05:20:54.260553 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.262661 kubelet[2738]: W0715 05:20:54.260561 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.262661 kubelet[2738]: E0715 05:20:54.260595 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.263198 kubelet[2738]: E0715 05:20:54.260765 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.263198 kubelet[2738]: W0715 05:20:54.260773 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.263198 kubelet[2738]: E0715 05:20:54.260851 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.263198 kubelet[2738]: E0715 05:20:54.260960 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.263198 kubelet[2738]: W0715 05:20:54.260967 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.263198 kubelet[2738]: E0715 05:20:54.261074 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.263198 kubelet[2738]: E0715 05:20:54.261128 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.263198 kubelet[2738]: W0715 05:20:54.261135 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.263198 kubelet[2738]: E0715 05:20:54.261213 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.263198 kubelet[2738]: E0715 05:20:54.261318 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.263831 kubelet[2738]: W0715 05:20:54.261324 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.263831 kubelet[2738]: E0715 05:20:54.261401 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:20:54.263831 kubelet[2738]: E0715 05:20:54.261536 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.263831 kubelet[2738]: W0715 05:20:54.261544 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.263831 kubelet[2738]: E0715 05:20:54.261619 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:20:54.263831 kubelet[2738]: E0715 05:20:54.261824 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:20:54.263831 kubelet[2738]: W0715 05:20:54.261832 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:20:54.263831 kubelet[2738]: E0715 05:20:54.261856 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 15 05:20:54.263831 kubelet[2738]: E0715 05:20:54.262463 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.263831 kubelet[2738]: W0715 05:20:54.262471 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.264049 kubelet[2738]: E0715 05:20:54.262540 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.264049 kubelet[2738]: E0715 05:20:54.262872 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.264049 kubelet[2738]: W0715 05:20:54.262881 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.264049 kubelet[2738]: E0715 05:20:54.262967 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.264049 kubelet[2738]: E0715 05:20:54.263522 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.264049 kubelet[2738]: W0715 05:20:54.263532 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.264049 kubelet[2738]: E0715 05:20:54.263611 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.264049 kubelet[2738]: E0715 05:20:54.263727 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.264049 kubelet[2738]: W0715 05:20:54.263734 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.264049 kubelet[2738]: E0715 05:20:54.263813 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.264488 kubelet[2738]: E0715 05:20:54.264018 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.264488 kubelet[2738]: W0715 05:20:54.264025 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.264640 kubelet[2738]: E0715 05:20:54.264611 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.264814 kubelet[2738]: E0715 05:20:54.264787 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.264814 kubelet[2738]: W0715 05:20:54.264804 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.265001 kubelet[2738]: E0715 05:20:54.264946 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.265043 kubelet[2738]: E0715 05:20:54.265014 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.265043 kubelet[2738]: W0715 05:20:54.265021 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.265103 kubelet[2738]: E0715 05:20:54.265096 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.267003 kubelet[2738]: E0715 05:20:54.266961 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.267003 kubelet[2738]: W0715 05:20:54.266980 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.267534 kubelet[2738]: E0715 05:20:54.267077 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.267534 kubelet[2738]: E0715 05:20:54.267254 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.267534 kubelet[2738]: W0715 05:20:54.267261 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.267534 kubelet[2738]: E0715 05:20:54.267270 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:54.277077 kubelet[2738]: E0715 05:20:54.277050 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:54.277077 kubelet[2738]: W0715 05:20:54.277068 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:54.277077 kubelet[2738]: E0715 05:20:54.277078 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.170708 containerd[1571]: time="2025-07-15T05:20:55.170664172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:20:55.171531 containerd[1571]: time="2025-07-15T05:20:55.171390424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 15 05:20:55.172217 containerd[1571]: time="2025-07-15T05:20:55.172182558Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:20:55.173712 containerd[1571]: time="2025-07-15T05:20:55.173680366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:20:55.174232 containerd[1571]: time="2025-07-15T05:20:55.174201174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.275524073s"
Jul 15 05:20:55.174295 containerd[1571]: time="2025-07-15T05:20:55.174281904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 15 05:20:55.175630 containerd[1571]: time="2025-07-15T05:20:55.175611443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 15 05:20:55.190947 containerd[1571]: time="2025-07-15T05:20:55.190911523Z" level=info msg="CreateContainer within sandbox \"4cdc716abe824a4db38dda98ca627fde9dedccf8411236a37187161de0fc9322\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 15 05:20:55.198599 containerd[1571]: time="2025-07-15T05:20:55.198575687Z" level=info msg="Container fa29860125dee2429da41a7674e0757cf8a07b4339d8e6907b093098303f4174: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:20:55.203995 containerd[1571]: time="2025-07-15T05:20:55.203958236Z" level=info msg="CreateContainer within sandbox \"4cdc716abe824a4db38dda98ca627fde9dedccf8411236a37187161de0fc9322\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fa29860125dee2429da41a7674e0757cf8a07b4339d8e6907b093098303f4174\""
Jul 15 05:20:55.204427 containerd[1571]: time="2025-07-15T05:20:55.204394673Z" level=info msg="StartContainer for \"fa29860125dee2429da41a7674e0757cf8a07b4339d8e6907b093098303f4174\""
Jul 15 05:20:55.205328 containerd[1571]: time="2025-07-15T05:20:55.205294214Z" level=info msg="connecting to shim fa29860125dee2429da41a7674e0757cf8a07b4339d8e6907b093098303f4174" address="unix:///run/containerd/s/e07e4647dfceaefa220af1ba408f481f5350226428caeb4e4fc3eee69a56146c" protocol=ttrpc version=3
Jul 15 05:20:55.228757 systemd[1]: Started cri-containerd-fa29860125dee2429da41a7674e0757cf8a07b4339d8e6907b093098303f4174.scope - libcontainer container fa29860125dee2429da41a7674e0757cf8a07b4339d8e6907b093098303f4174.
Jul 15 05:20:55.288336 containerd[1571]: time="2025-07-15T05:20:55.288257112Z" level=info msg="StartContainer for \"fa29860125dee2429da41a7674e0757cf8a07b4339d8e6907b093098303f4174\" returns successfully"
Jul 15 05:20:55.749562 kubelet[2738]: E0715 05:20:55.749097 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjjqr" podUID="2906217e-d60f-46de-8e0a-40e519ff8ae1"
Jul 15 05:20:55.837850 kubelet[2738]: E0715 05:20:55.837821 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:20:55.854508 kubelet[2738]: I0715 05:20:55.854367 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bbc65df77-hmmh9" podStartSLOduration=1.577095013 podStartE2EDuration="2.854347599s" podCreationTimestamp="2025-07-15 05:20:53 +0000 UTC" firstStartedPulling="2025-07-15 05:20:53.898132665 +0000 UTC m=+18.243113261" lastFinishedPulling="2025-07-15 05:20:55.175385241 +0000 UTC m=+19.520365847" observedRunningTime="2025-07-15 05:20:55.853954657 +0000 UTC m=+20.198935263" watchObservedRunningTime="2025-07-15 05:20:55.854347599 +0000 UTC m=+20.199328195"
Jul 15 05:20:55.876685 kubelet[2738]: E0715 05:20:55.876552 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.876685 kubelet[2738]: W0715 05:20:55.876573 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.876685 kubelet[2738]: E0715 05:20:55.876594 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.876971 kubelet[2738]: E0715 05:20:55.876872 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.876971 kubelet[2738]: W0715 05:20:55.876883 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.876971 kubelet[2738]: E0715 05:20:55.876893 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.877211 kubelet[2738]: E0715 05:20:55.877108 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.877211 kubelet[2738]: W0715 05:20:55.877117 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.877211 kubelet[2738]: E0715 05:20:55.877126 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.877346 kubelet[2738]: E0715 05:20:55.877336 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.877470 kubelet[2738]: W0715 05:20:55.877381 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.877470 kubelet[2738]: E0715 05:20:55.877392 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.877779 kubelet[2738]: E0715 05:20:55.877729 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.877779 kubelet[2738]: W0715 05:20:55.877738 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.877779 kubelet[2738]: E0715 05:20:55.877746 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.878228 kubelet[2738]: E0715 05:20:55.878106 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.878327 kubelet[2738]: W0715 05:20:55.878275 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.878327 kubelet[2738]: E0715 05:20:55.878290 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.878758 kubelet[2738]: E0715 05:20:55.878692 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.878758 kubelet[2738]: W0715 05:20:55.878702 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.878758 kubelet[2738]: E0715 05:20:55.878710 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.879201 kubelet[2738]: E0715 05:20:55.879174 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.879442 kubelet[2738]: W0715 05:20:55.879333 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.879442 kubelet[2738]: E0715 05:20:55.879346 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.879645 kubelet[2738]: E0715 05:20:55.879628 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.879779 kubelet[2738]: W0715 05:20:55.879723 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.879779 kubelet[2738]: E0715 05:20:55.879735 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.880252 kubelet[2738]: E0715 05:20:55.880148 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.880252 kubelet[2738]: W0715 05:20:55.880158 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.880252 kubelet[2738]: E0715 05:20:55.880166 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.880676 kubelet[2738]: E0715 05:20:55.880491 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.880676 kubelet[2738]: W0715 05:20:55.880533 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.880676 kubelet[2738]: E0715 05:20:55.880542 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.881038 kubelet[2738]: E0715 05:20:55.880909 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.881102 kubelet[2738]: W0715 05:20:55.881090 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.881153 kubelet[2738]: E0715 05:20:55.881143 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.881699 kubelet[2738]: E0715 05:20:55.881644 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.881699 kubelet[2738]: W0715 05:20:55.881654 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.881699 kubelet[2738]: E0715 05:20:55.881663 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.882116 kubelet[2738]: E0715 05:20:55.881996 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.882116 kubelet[2738]: W0715 05:20:55.882006 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.882116 kubelet[2738]: E0715 05:20:55.882014 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.882665 kubelet[2738]: E0715 05:20:55.882489 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.882665 kubelet[2738]: W0715 05:20:55.882622 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.882665 kubelet[2738]: E0715 05:20:55.882631 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.913612 containerd[1571]: time="2025-07-15T05:20:55.913575097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:20:55.916324 containerd[1571]: time="2025-07-15T05:20:55.914785741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 15 05:20:55.916324 containerd[1571]: time="2025-07-15T05:20:55.914836954Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:20:55.917876 containerd[1571]: time="2025-07-15T05:20:55.917840992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:20:55.918617 containerd[1571]: time="2025-07-15T05:20:55.918563604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 742.755475ms"
Jul 15 05:20:55.918664 containerd[1571]: time="2025-07-15T05:20:55.918616278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 15 05:20:55.921090 containerd[1571]: time="2025-07-15T05:20:55.921067272Z" level=info msg="CreateContainer within sandbox \"e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 15 05:20:55.929685 containerd[1571]: time="2025-07-15T05:20:55.929664271Z" level=info msg="Container 1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:20:55.938508 containerd[1571]: time="2025-07-15T05:20:55.938469147Z" level=info msg="CreateContainer within sandbox \"e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7\""
Jul 15 05:20:55.939046 containerd[1571]: time="2025-07-15T05:20:55.938956368Z" level=info msg="StartContainer for \"1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7\""
Jul 15 05:20:55.940576 containerd[1571]: time="2025-07-15T05:20:55.940523409Z" level=info msg="connecting to shim 1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7" address="unix:///run/containerd/s/e60642e2e387601a0e58320f6dc6a296f44f3534cda5a6e3d7aebfb0012a8ac8" protocol=ttrpc version=3
Jul 15 05:20:55.965638 systemd[1]: Started cri-containerd-1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7.scope - libcontainer container 1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7.
Jul 15 05:20:55.972572 kubelet[2738]: E0715 05:20:55.972536 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.972572 kubelet[2738]: W0715 05:20:55.972566 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.972759 kubelet[2738]: E0715 05:20:55.972608 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.972887 kubelet[2738]: E0715 05:20:55.972864 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.972917 kubelet[2738]: W0715 05:20:55.972880 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.973533 kubelet[2738]: E0715 05:20:55.972918 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.973533 kubelet[2738]: E0715 05:20:55.973168 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.973533 kubelet[2738]: W0715 05:20:55.973175 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.973533 kubelet[2738]: E0715 05:20:55.973191 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.973533 kubelet[2738]: E0715 05:20:55.973422 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.973533 kubelet[2738]: W0715 05:20:55.973430 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.973533 kubelet[2738]: E0715 05:20:55.973446 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.973685 kubelet[2738]: E0715 05:20:55.973635 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.973685 kubelet[2738]: W0715 05:20:55.973645 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.973685 kubelet[2738]: E0715 05:20:55.973653 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.973849 kubelet[2738]: E0715 05:20:55.973829 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.973882 kubelet[2738]: W0715 05:20:55.973868 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.973915 kubelet[2738]: E0715 05:20:55.973885 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.974159 kubelet[2738]: E0715 05:20:55.974135 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.974159 kubelet[2738]: W0715 05:20:55.974151 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.974236 kubelet[2738]: E0715 05:20:55.974176 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.974470 kubelet[2738]: E0715 05:20:55.974435 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.974470 kubelet[2738]: W0715 05:20:55.974451 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.974547 kubelet[2738]: E0715 05:20:55.974483 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.975036 kubelet[2738]: E0715 05:20:55.975000 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.975036 kubelet[2738]: W0715 05:20:55.975025 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.975036 kubelet[2738]: E0715 05:20:55.975049 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.975313 kubelet[2738]: E0715 05:20:55.975273 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.975313 kubelet[2738]: W0715 05:20:55.975290 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.975463 kubelet[2738]: E0715 05:20:55.975384 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.975594 kubelet[2738]: E0715 05:20:55.975559 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.975594 kubelet[2738]: W0715 05:20:55.975574 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.975720 kubelet[2738]: E0715 05:20:55.975694 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.975775 kubelet[2738]: E0715 05:20:55.975769 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.975801 kubelet[2738]: W0715 05:20:55.975776 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.975801 kubelet[2738]: E0715 05:20:55.975791 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.976053 kubelet[2738]: E0715 05:20:55.976028 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.976053 kubelet[2738]: W0715 05:20:55.976043 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.976230 kubelet[2738]: E0715 05:20:55.976069 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.976439 kubelet[2738]: E0715 05:20:55.976422 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.976439 kubelet[2738]: W0715 05:20:55.976434 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.976513 kubelet[2738]: E0715 05:20:55.976445 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.976660 kubelet[2738]: E0715 05:20:55.976622 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.976660 kubelet[2738]: W0715 05:20:55.976629 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.976719 kubelet[2738]: E0715 05:20:55.976681 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.976899 kubelet[2738]: E0715 05:20:55.976875 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.976899 kubelet[2738]: W0715 05:20:55.976888 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.976955 kubelet[2738]: E0715 05:20:55.976903 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.977154 kubelet[2738]: E0715 05:20:55.977129 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.977154 kubelet[2738]: W0715 05:20:55.977145 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.977154 kubelet[2738]: E0715 05:20:55.977153 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:55.977577 kubelet[2738]: E0715 05:20:55.977552 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:20:55.977577 kubelet[2738]: W0715 05:20:55.977568 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:20:55.977577 kubelet[2738]: E0715 05:20:55.977575 2738 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:20:56.014925 containerd[1571]: time="2025-07-15T05:20:56.014777831Z" level=info msg="StartContainer for \"1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7\" returns successfully"
Jul 15 05:20:56.031554 systemd[1]: cri-containerd-1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7.scope: Deactivated successfully.
Jul 15 05:20:56.034072 containerd[1571]: time="2025-07-15T05:20:56.034029051Z" level=info msg="received exit event container_id:\"1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7\" id:\"1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7\" pid:3403 exited_at:{seconds:1752556856 nanos:33597609}"
Jul 15 05:20:56.034228 containerd[1571]: time="2025-07-15T05:20:56.034194702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7\" id:\"1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7\" pid:3403 exited_at:{seconds:1752556856 nanos:33597609}"
Jul 15 05:20:56.064096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b752c0d58ef092573d29c5b31732a1f7cb8f70c50247a1690938122b1dca8c7-rootfs.mount: Deactivated successfully.
Jul 15 05:20:56.842235 kubelet[2738]: I0715 05:20:56.841539 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:20:56.842235 kubelet[2738]: E0715 05:20:56.841887 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:56.845466 containerd[1571]: time="2025-07-15T05:20:56.844594629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 05:20:57.751640 kubelet[2738]: E0715 05:20:57.750721 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjjqr" podUID="2906217e-d60f-46de-8e0a-40e519ff8ae1" Jul 15 05:20:58.749051 containerd[1571]: time="2025-07-15T05:20:58.749005802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:58.749946 containerd[1571]: time="2025-07-15T05:20:58.749721510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 05:20:58.750493 containerd[1571]: time="2025-07-15T05:20:58.750460467Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:58.754288 containerd[1571]: time="2025-07-15T05:20:58.752553559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:58.754288 containerd[1571]: time="2025-07-15T05:20:58.753095535Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 1.908437134s" Jul 15 05:20:58.754288 containerd[1571]: time="2025-07-15T05:20:58.753116163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 05:20:58.755143 containerd[1571]: time="2025-07-15T05:20:58.755092297Z" level=info msg="CreateContainer within sandbox \"e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 05:20:58.769351 containerd[1571]: time="2025-07-15T05:20:58.767811313Z" level=info msg="Container 0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:20:58.775150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2785155829.mount: Deactivated successfully. 
Jul 15 05:20:58.780533 containerd[1571]: time="2025-07-15T05:20:58.780139727Z" level=info msg="CreateContainer within sandbox \"e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf\"" Jul 15 05:20:58.781156 containerd[1571]: time="2025-07-15T05:20:58.781136758Z" level=info msg="StartContainer for \"0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf\"" Jul 15 05:20:58.782751 containerd[1571]: time="2025-07-15T05:20:58.782490504Z" level=info msg="connecting to shim 0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf" address="unix:///run/containerd/s/e60642e2e387601a0e58320f6dc6a296f44f3534cda5a6e3d7aebfb0012a8ac8" protocol=ttrpc version=3 Jul 15 05:20:58.813994 systemd[1]: Started cri-containerd-0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf.scope - libcontainer container 0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf. Jul 15 05:20:58.873406 containerd[1571]: time="2025-07-15T05:20:58.873368043Z" level=info msg="StartContainer for \"0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf\" returns successfully" Jul 15 05:20:59.366717 containerd[1571]: time="2025-07-15T05:20:59.366590380Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:20:59.369685 systemd[1]: cri-containerd-0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf.scope: Deactivated successfully. Jul 15 05:20:59.370021 systemd[1]: cri-containerd-0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf.scope: Consumed 508ms CPU time, 195M memory peak, 171.2M written to disk. 
Jul 15 05:20:59.371563 containerd[1571]: time="2025-07-15T05:20:59.371473565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf\" id:\"0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf\" pid:3480 exited_at:{seconds:1752556859 nanos:371080612}" Jul 15 05:20:59.371627 containerd[1571]: time="2025-07-15T05:20:59.371486084Z" level=info msg="received exit event container_id:\"0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf\" id:\"0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf\" pid:3480 exited_at:{seconds:1752556859 nanos:371080612}" Jul 15 05:20:59.384641 kubelet[2738]: I0715 05:20:59.384622 2738 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 05:20:59.405946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0516838cace5cd111e228ae56d2a854bccd08b5e41c4809a001441341aee9dcf-rootfs.mount: Deactivated successfully. Jul 15 05:20:59.465307 systemd[1]: Created slice kubepods-besteffort-pod844319f9_6aba_429e_a787_913ecb604968.slice - libcontainer container kubepods-besteffort-pod844319f9_6aba_429e_a787_913ecb604968.slice. Jul 15 05:20:59.481455 systemd[1]: Created slice kubepods-burstable-pod0741ab0b_0f47_4e25_bc74_000420332dfe.slice - libcontainer container kubepods-burstable-pod0741ab0b_0f47_4e25_bc74_000420332dfe.slice. Jul 15 05:20:59.490667 systemd[1]: Created slice kubepods-besteffort-pod977684f5_8c03_4eaa_86e9_f712519d6004.slice - libcontainer container kubepods-besteffort-pod977684f5_8c03_4eaa_86e9_f712519d6004.slice. Jul 15 05:20:59.500334 systemd[1]: Created slice kubepods-burstable-pod6bf14037_0af1_4def_a60f_4ed667c2ddc4.slice - libcontainer container kubepods-burstable-pod6bf14037_0af1_4def_a60f_4ed667c2ddc4.slice. 
Jul 15 05:20:59.510044 systemd[1]: Created slice kubepods-besteffort-podd1d64b8b_3dbd_4eb7_b4e8_bd08cd407c51.slice - libcontainer container kubepods-besteffort-podd1d64b8b_3dbd_4eb7_b4e8_bd08cd407c51.slice. Jul 15 05:20:59.519879 systemd[1]: Created slice kubepods-besteffort-podcba66337_d964_4612_bd04_820a667c6818.slice - libcontainer container kubepods-besteffort-podcba66337_d964_4612_bd04_820a667c6818.slice. Jul 15 05:20:59.528297 systemd[1]: Created slice kubepods-besteffort-podfcda4639_02b0_430f_9884_a07dd33b2a1a.slice - libcontainer container kubepods-besteffort-podfcda4639_02b0_430f_9884_a07dd33b2a1a.slice. Jul 15 05:20:59.594938 kubelet[2738]: I0715 05:20:59.594884 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcda4639-02b0-430f-9884-a07dd33b2a1a-whisker-ca-bundle\") pod \"whisker-6596ccdd54-dsgnl\" (UID: \"fcda4639-02b0-430f-9884-a07dd33b2a1a\") " pod="calico-system/whisker-6596ccdd54-dsgnl" Jul 15 05:20:59.594938 kubelet[2738]: I0715 05:20:59.594919 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfjkh\" (UniqueName: \"kubernetes.io/projected/fcda4639-02b0-430f-9884-a07dd33b2a1a-kube-api-access-mfjkh\") pod \"whisker-6596ccdd54-dsgnl\" (UID: \"fcda4639-02b0-430f-9884-a07dd33b2a1a\") " pod="calico-system/whisker-6596ccdd54-dsgnl" Jul 15 05:20:59.594938 kubelet[2738]: I0715 05:20:59.594936 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqwfc\" (UniqueName: \"kubernetes.io/projected/cba66337-d964-4612-bd04-820a667c6818-kube-api-access-mqwfc\") pod \"calico-apiserver-74555f585c-q8hg6\" (UID: \"cba66337-d964-4612-bd04-820a667c6818\") " pod="calico-apiserver/calico-apiserver-74555f585c-q8hg6" Jul 15 05:20:59.595133 kubelet[2738]: I0715 05:20:59.594953 2738 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/977684f5-8c03-4eaa-86e9-f712519d6004-calico-apiserver-certs\") pod \"calico-apiserver-74555f585c-mntkg\" (UID: \"977684f5-8c03-4eaa-86e9-f712519d6004\") " pod="calico-apiserver/calico-apiserver-74555f585c-mntkg" Jul 15 05:20:59.595133 kubelet[2738]: I0715 05:20:59.594968 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcda4639-02b0-430f-9884-a07dd33b2a1a-whisker-backend-key-pair\") pod \"whisker-6596ccdd54-dsgnl\" (UID: \"fcda4639-02b0-430f-9884-a07dd33b2a1a\") " pod="calico-system/whisker-6596ccdd54-dsgnl" Jul 15 05:20:59.595133 kubelet[2738]: I0715 05:20:59.594982 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51-config\") pod \"goldmane-58fd7646b9-ktdbq\" (UID: \"d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51\") " pod="calico-system/goldmane-58fd7646b9-ktdbq" Jul 15 05:20:59.595133 kubelet[2738]: I0715 05:20:59.594996 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bf14037-0af1-4def-a60f-4ed667c2ddc4-config-volume\") pod \"coredns-7c65d6cfc9-5mgz4\" (UID: \"6bf14037-0af1-4def-a60f-4ed667c2ddc4\") " pod="kube-system/coredns-7c65d6cfc9-5mgz4" Jul 15 05:20:59.595133 kubelet[2738]: I0715 05:20:59.595010 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlcqz\" (UniqueName: \"kubernetes.io/projected/6bf14037-0af1-4def-a60f-4ed667c2ddc4-kube-api-access-rlcqz\") pod \"coredns-7c65d6cfc9-5mgz4\" (UID: \"6bf14037-0af1-4def-a60f-4ed667c2ddc4\") " pod="kube-system/coredns-7c65d6cfc9-5mgz4" Jul 15 05:20:59.595271 kubelet[2738]: 
I0715 05:20:59.595026 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmrtb\" (UniqueName: \"kubernetes.io/projected/d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51-kube-api-access-bmrtb\") pod \"goldmane-58fd7646b9-ktdbq\" (UID: \"d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51\") " pod="calico-system/goldmane-58fd7646b9-ktdbq" Jul 15 05:20:59.595271 kubelet[2738]: I0715 05:20:59.595038 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/844319f9-6aba-429e-a787-913ecb604968-tigera-ca-bundle\") pod \"calico-kube-controllers-5d5cd7bbcf-8pzcc\" (UID: \"844319f9-6aba-429e-a787-913ecb604968\") " pod="calico-system/calico-kube-controllers-5d5cd7bbcf-8pzcc" Jul 15 05:20:59.595271 kubelet[2738]: I0715 05:20:59.595052 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtqj\" (UniqueName: \"kubernetes.io/projected/844319f9-6aba-429e-a787-913ecb604968-kube-api-access-5vtqj\") pod \"calico-kube-controllers-5d5cd7bbcf-8pzcc\" (UID: \"844319f9-6aba-429e-a787-913ecb604968\") " pod="calico-system/calico-kube-controllers-5d5cd7bbcf-8pzcc" Jul 15 05:20:59.595271 kubelet[2738]: I0715 05:20:59.595068 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vks\" (UniqueName: \"kubernetes.io/projected/0741ab0b-0f47-4e25-bc74-000420332dfe-kube-api-access-79vks\") pod \"coredns-7c65d6cfc9-j2s8z\" (UID: \"0741ab0b-0f47-4e25-bc74-000420332dfe\") " pod="kube-system/coredns-7c65d6cfc9-j2s8z" Jul 15 05:20:59.595271 kubelet[2738]: I0715 05:20:59.595083 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51-goldmane-key-pair\") pod \"goldmane-58fd7646b9-ktdbq\" (UID: 
\"d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51\") " pod="calico-system/goldmane-58fd7646b9-ktdbq" Jul 15 05:20:59.595380 kubelet[2738]: I0715 05:20:59.595098 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkprx\" (UniqueName: \"kubernetes.io/projected/977684f5-8c03-4eaa-86e9-f712519d6004-kube-api-access-fkprx\") pod \"calico-apiserver-74555f585c-mntkg\" (UID: \"977684f5-8c03-4eaa-86e9-f712519d6004\") " pod="calico-apiserver/calico-apiserver-74555f585c-mntkg" Jul 15 05:20:59.595380 kubelet[2738]: I0715 05:20:59.595111 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-ktdbq\" (UID: \"d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51\") " pod="calico-system/goldmane-58fd7646b9-ktdbq" Jul 15 05:20:59.595380 kubelet[2738]: I0715 05:20:59.595126 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0741ab0b-0f47-4e25-bc74-000420332dfe-config-volume\") pod \"coredns-7c65d6cfc9-j2s8z\" (UID: \"0741ab0b-0f47-4e25-bc74-000420332dfe\") " pod="kube-system/coredns-7c65d6cfc9-j2s8z" Jul 15 05:20:59.595380 kubelet[2738]: I0715 05:20:59.595140 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cba66337-d964-4612-bd04-820a667c6818-calico-apiserver-certs\") pod \"calico-apiserver-74555f585c-q8hg6\" (UID: \"cba66337-d964-4612-bd04-820a667c6818\") " pod="calico-apiserver/calico-apiserver-74555f585c-q8hg6" Jul 15 05:20:59.756771 systemd[1]: Created slice kubepods-besteffort-pod2906217e_d60f_46de_8e0a_40e519ff8ae1.slice - libcontainer container kubepods-besteffort-pod2906217e_d60f_46de_8e0a_40e519ff8ae1.slice. 
Jul 15 05:20:59.759844 containerd[1571]: time="2025-07-15T05:20:59.759814924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjjqr,Uid:2906217e-d60f-46de-8e0a-40e519ff8ae1,Namespace:calico-system,Attempt:0,}" Jul 15 05:20:59.778659 containerd[1571]: time="2025-07-15T05:20:59.778618774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5cd7bbcf-8pzcc,Uid:844319f9-6aba-429e-a787-913ecb604968,Namespace:calico-system,Attempt:0,}" Jul 15 05:20:59.786329 kubelet[2738]: E0715 05:20:59.785974 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:59.789216 containerd[1571]: time="2025-07-15T05:20:59.788939064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j2s8z,Uid:0741ab0b-0f47-4e25-bc74-000420332dfe,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:59.798077 containerd[1571]: time="2025-07-15T05:20:59.797959554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74555f585c-mntkg,Uid:977684f5-8c03-4eaa-86e9-f712519d6004,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:20:59.814840 kubelet[2738]: E0715 05:20:59.814555 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:20:59.816760 containerd[1571]: time="2025-07-15T05:20:59.816601919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5mgz4,Uid:6bf14037-0af1-4def-a60f-4ed667c2ddc4,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:59.819303 containerd[1571]: time="2025-07-15T05:20:59.819075709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ktdbq,Uid:d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51,Namespace:calico-system,Attempt:0,}" Jul 15 05:20:59.825456 
containerd[1571]: time="2025-07-15T05:20:59.825435417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74555f585c-q8hg6,Uid:cba66337-d964-4612-bd04-820a667c6818,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:20:59.835075 containerd[1571]: time="2025-07-15T05:20:59.835053782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6596ccdd54-dsgnl,Uid:fcda4639-02b0-430f-9884-a07dd33b2a1a,Namespace:calico-system,Attempt:0,}" Jul 15 05:20:59.883185 containerd[1571]: time="2025-07-15T05:20:59.882357830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 05:20:59.929301 containerd[1571]: time="2025-07-15T05:20:59.929265844Z" level=error msg="Failed to destroy network for sandbox \"314b469498d0691b0b6167e68f49f042f800bfc330bdb3f2cc36345cfec0f75a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:20:59.931880 containerd[1571]: time="2025-07-15T05:20:59.931848434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjjqr,Uid:2906217e-d60f-46de-8e0a-40e519ff8ae1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"314b469498d0691b0b6167e68f49f042f800bfc330bdb3f2cc36345cfec0f75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:20:59.932539 kubelet[2738]: E0715 05:20:59.932231 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"314b469498d0691b0b6167e68f49f042f800bfc330bdb3f2cc36345cfec0f75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 15 05:20:59.932539 kubelet[2738]: E0715 05:20:59.932305 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"314b469498d0691b0b6167e68f49f042f800bfc330bdb3f2cc36345cfec0f75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjjqr" Jul 15 05:20:59.932539 kubelet[2738]: E0715 05:20:59.932326 2738 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"314b469498d0691b0b6167e68f49f042f800bfc330bdb3f2cc36345cfec0f75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjjqr" Jul 15 05:20:59.932655 kubelet[2738]: E0715 05:20:59.932365 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjjqr_calico-system(2906217e-d60f-46de-8e0a-40e519ff8ae1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjjqr_calico-system(2906217e-d60f-46de-8e0a-40e519ff8ae1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"314b469498d0691b0b6167e68f49f042f800bfc330bdb3f2cc36345cfec0f75a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjjqr" podUID="2906217e-d60f-46de-8e0a-40e519ff8ae1" Jul 15 05:20:59.980373 containerd[1571]: time="2025-07-15T05:20:59.980254859Z" level=error msg="Failed to destroy network for sandbox 
\"bf795558d96bb3124c0c76e1636b6f37537082847d17fdba37b87a113b71e848\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:20:59.983271 containerd[1571]: time="2025-07-15T05:20:59.983241611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74555f585c-q8hg6,Uid:cba66337-d964-4612-bd04-820a667c6818,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf795558d96bb3124c0c76e1636b6f37537082847d17fdba37b87a113b71e848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:20:59.984599 kubelet[2738]: E0715 05:20:59.984541 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf795558d96bb3124c0c76e1636b6f37537082847d17fdba37b87a113b71e848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:20:59.985001 kubelet[2738]: E0715 05:20:59.984966 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf795558d96bb3124c0c76e1636b6f37537082847d17fdba37b87a113b71e848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74555f585c-q8hg6" Jul 15 05:20:59.985052 kubelet[2738]: E0715 05:20:59.984996 2738 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bf795558d96bb3124c0c76e1636b6f37537082847d17fdba37b87a113b71e848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74555f585c-q8hg6" Jul 15 05:20:59.985080 kubelet[2738]: E0715 05:20:59.985058 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74555f585c-q8hg6_calico-apiserver(cba66337-d964-4612-bd04-820a667c6818)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74555f585c-q8hg6_calico-apiserver(cba66337-d964-4612-bd04-820a667c6818)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf795558d96bb3124c0c76e1636b6f37537082847d17fdba37b87a113b71e848\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74555f585c-q8hg6" podUID="cba66337-d964-4612-bd04-820a667c6818" Jul 15 05:21:00.007808 containerd[1571]: time="2025-07-15T05:21:00.007709407Z" level=error msg="Failed to destroy network for sandbox \"3a8e7ecfc415f5b371c3da1ada2d5a47f4099b86b0ede37208cb1b263cc2849e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.010530 containerd[1571]: time="2025-07-15T05:21:00.010462317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j2s8z,Uid:0741ab0b-0f47-4e25-bc74-000420332dfe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8e7ecfc415f5b371c3da1ada2d5a47f4099b86b0ede37208cb1b263cc2849e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.011195 kubelet[2738]: E0715 05:21:00.011069 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8e7ecfc415f5b371c3da1ada2d5a47f4099b86b0ede37208cb1b263cc2849e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.011324 kubelet[2738]: E0715 05:21:00.011297 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8e7ecfc415f5b371c3da1ada2d5a47f4099b86b0ede37208cb1b263cc2849e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j2s8z" Jul 15 05:21:00.011354 kubelet[2738]: E0715 05:21:00.011324 2738 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8e7ecfc415f5b371c3da1ada2d5a47f4099b86b0ede37208cb1b263cc2849e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j2s8z" Jul 15 05:21:00.011424 kubelet[2738]: E0715 05:21:00.011390 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-j2s8z_kube-system(0741ab0b-0f47-4e25-bc74-000420332dfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-j2s8z_kube-system(0741ab0b-0f47-4e25-bc74-000420332dfe)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"3a8e7ecfc415f5b371c3da1ada2d5a47f4099b86b0ede37208cb1b263cc2849e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-j2s8z" podUID="0741ab0b-0f47-4e25-bc74-000420332dfe" Jul 15 05:21:00.033860 containerd[1571]: time="2025-07-15T05:21:00.033827820Z" level=error msg="Failed to destroy network for sandbox \"3a77464dd7e45081801e17583ef309d4c8c54de0bc101ff7661b1b0f6da4bcbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.035455 containerd[1571]: time="2025-07-15T05:21:00.035079841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74555f585c-mntkg,Uid:977684f5-8c03-4eaa-86e9-f712519d6004,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a77464dd7e45081801e17583ef309d4c8c54de0bc101ff7661b1b0f6da4bcbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.035908 kubelet[2738]: E0715 05:21:00.035838 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a77464dd7e45081801e17583ef309d4c8c54de0bc101ff7661b1b0f6da4bcbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.036022 kubelet[2738]: E0715 05:21:00.035886 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3a77464dd7e45081801e17583ef309d4c8c54de0bc101ff7661b1b0f6da4bcbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74555f585c-mntkg" Jul 15 05:21:00.036059 kubelet[2738]: E0715 05:21:00.036023 2738 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a77464dd7e45081801e17583ef309d4c8c54de0bc101ff7661b1b0f6da4bcbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74555f585c-mntkg" Jul 15 05:21:00.036680 kubelet[2738]: E0715 05:21:00.036344 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74555f585c-mntkg_calico-apiserver(977684f5-8c03-4eaa-86e9-f712519d6004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74555f585c-mntkg_calico-apiserver(977684f5-8c03-4eaa-86e9-f712519d6004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a77464dd7e45081801e17583ef309d4c8c54de0bc101ff7661b1b0f6da4bcbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74555f585c-mntkg" podUID="977684f5-8c03-4eaa-86e9-f712519d6004" Jul 15 05:21:00.040921 containerd[1571]: time="2025-07-15T05:21:00.040898084Z" level=error msg="Failed to destroy network for sandbox \"e9f5266edb010ed89638e42306e4fe6f381b743ddaae2d2a400fa794878b3838\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 15 05:21:00.041048 containerd[1571]: time="2025-07-15T05:21:00.040912343Z" level=error msg="Failed to destroy network for sandbox \"b990622c59e9036533d2192de85eadf6917150727da83148393f12024b5b5c4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.041876 containerd[1571]: time="2025-07-15T05:21:00.041849051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5cd7bbcf-8pzcc,Uid:844319f9-6aba-429e-a787-913ecb604968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f5266edb010ed89638e42306e4fe6f381b743ddaae2d2a400fa794878b3838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.043046 kubelet[2738]: E0715 05:21:00.043010 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f5266edb010ed89638e42306e4fe6f381b743ddaae2d2a400fa794878b3838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.043143 containerd[1571]: time="2025-07-15T05:21:00.042385275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5mgz4,Uid:6bf14037-0af1-4def-a60f-4ed667c2ddc4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b990622c59e9036533d2192de85eadf6917150727da83148393f12024b5b5c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.043244 kubelet[2738]: E0715 05:21:00.043225 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f5266edb010ed89638e42306e4fe6f381b743ddaae2d2a400fa794878b3838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d5cd7bbcf-8pzcc" Jul 15 05:21:00.043304 kubelet[2738]: E0715 05:21:00.043289 2738 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f5266edb010ed89638e42306e4fe6f381b743ddaae2d2a400fa794878b3838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d5cd7bbcf-8pzcc" Jul 15 05:21:00.043469 kubelet[2738]: E0715 05:21:00.043393 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d5cd7bbcf-8pzcc_calico-system(844319f9-6aba-429e-a787-913ecb604968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d5cd7bbcf-8pzcc_calico-system(844319f9-6aba-429e-a787-913ecb604968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9f5266edb010ed89638e42306e4fe6f381b743ddaae2d2a400fa794878b3838\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d5cd7bbcf-8pzcc" podUID="844319f9-6aba-429e-a787-913ecb604968" Jul 15 05:21:00.044391 kubelet[2738]: E0715 05:21:00.043587 2738 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b990622c59e9036533d2192de85eadf6917150727da83148393f12024b5b5c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.044391 kubelet[2738]: E0715 05:21:00.043621 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b990622c59e9036533d2192de85eadf6917150727da83148393f12024b5b5c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5mgz4" Jul 15 05:21:00.044391 kubelet[2738]: E0715 05:21:00.043640 2738 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b990622c59e9036533d2192de85eadf6917150727da83148393f12024b5b5c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5mgz4" Jul 15 05:21:00.044479 kubelet[2738]: E0715 05:21:00.043680 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-5mgz4_kube-system(6bf14037-0af1-4def-a60f-4ed667c2ddc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-5mgz4_kube-system(6bf14037-0af1-4def-a60f-4ed667c2ddc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b990622c59e9036533d2192de85eadf6917150727da83148393f12024b5b5c4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5mgz4" podUID="6bf14037-0af1-4def-a60f-4ed667c2ddc4" Jul 15 05:21:00.052258 containerd[1571]: time="2025-07-15T05:21:00.052193349Z" level=error msg="Failed to destroy network for sandbox \"823d336893bb9288113686fb6e05b82c4d97ac21ac1fa1975bf382c1efe78f70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.053338 containerd[1571]: time="2025-07-15T05:21:00.053269896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6596ccdd54-dsgnl,Uid:fcda4639-02b0-430f-9884-a07dd33b2a1a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"823d336893bb9288113686fb6e05b82c4d97ac21ac1fa1975bf382c1efe78f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.053527 kubelet[2738]: E0715 05:21:00.053472 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823d336893bb9288113686fb6e05b82c4d97ac21ac1fa1975bf382c1efe78f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.053568 kubelet[2738]: E0715 05:21:00.053554 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823d336893bb9288113686fb6e05b82c4d97ac21ac1fa1975bf382c1efe78f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-6596ccdd54-dsgnl" Jul 15 05:21:00.053596 kubelet[2738]: E0715 05:21:00.053569 2738 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823d336893bb9288113686fb6e05b82c4d97ac21ac1fa1975bf382c1efe78f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6596ccdd54-dsgnl" Jul 15 05:21:00.054397 kubelet[2738]: E0715 05:21:00.053751 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6596ccdd54-dsgnl_calico-system(fcda4639-02b0-430f-9884-a07dd33b2a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6596ccdd54-dsgnl_calico-system(fcda4639-02b0-430f-9884-a07dd33b2a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"823d336893bb9288113686fb6e05b82c4d97ac21ac1fa1975bf382c1efe78f70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6596ccdd54-dsgnl" podUID="fcda4639-02b0-430f-9884-a07dd33b2a1a" Jul 15 05:21:00.060727 containerd[1571]: time="2025-07-15T05:21:00.060699798Z" level=error msg="Failed to destroy network for sandbox \"c49f8c695efa56a89c6fd67383409d523aefe0f1553e3e28868eede2424c5e14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.061520 containerd[1571]: time="2025-07-15T05:21:00.061453602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ktdbq,Uid:d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51,Namespace:calico-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c49f8c695efa56a89c6fd67383409d523aefe0f1553e3e28868eede2424c5e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.061771 kubelet[2738]: E0715 05:21:00.061667 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c49f8c695efa56a89c6fd67383409d523aefe0f1553e3e28868eede2424c5e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:21:00.061771 kubelet[2738]: E0715 05:21:00.061705 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c49f8c695efa56a89c6fd67383409d523aefe0f1553e3e28868eede2424c5e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ktdbq" Jul 15 05:21:00.061771 kubelet[2738]: E0715 05:21:00.061726 2738 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c49f8c695efa56a89c6fd67383409d523aefe0f1553e3e28868eede2424c5e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ktdbq" Jul 15 05:21:00.061876 kubelet[2738]: E0715 05:21:00.061763 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-58fd7646b9-ktdbq_calico-system(d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-ktdbq_calico-system(d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c49f8c695efa56a89c6fd67383409d523aefe0f1553e3e28868eede2424c5e14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ktdbq" podUID="d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51" Jul 15 05:21:00.770795 systemd[1]: run-netns-cni\x2dc15dbd1f\x2d8dd9\x2d401e\x2df63f\x2d8a954a63610a.mount: Deactivated successfully. Jul 15 05:21:00.770907 systemd[1]: run-netns-cni\x2da47a26c0\x2dfd66\x2d8f98\x2d2119\x2d56a1dd2b09de.mount: Deactivated successfully. Jul 15 05:21:00.770974 systemd[1]: run-netns-cni\x2d965c361b\x2d3e5c\x2d21e0\x2d341f\x2dc21d84eebbe0.mount: Deactivated successfully. Jul 15 05:21:00.771034 systemd[1]: run-netns-cni\x2da4075555\x2d5526\x2d6a1b\x2d8bb3\x2d6c0ad666f385.mount: Deactivated successfully. Jul 15 05:21:00.771093 systemd[1]: run-netns-cni\x2d070e4ac3\x2d40c1\x2dde3f\x2d8a28\x2da3541cc00219.mount: Deactivated successfully. Jul 15 05:21:03.559218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243605516.mount: Deactivated successfully. 
Jul 15 05:21:03.588837 containerd[1571]: time="2025-07-15T05:21:03.588786683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:03.589710 containerd[1571]: time="2025-07-15T05:21:03.589677539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 05:21:03.590274 containerd[1571]: time="2025-07-15T05:21:03.590213481Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:03.591459 containerd[1571]: time="2025-07-15T05:21:03.591438933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:03.592114 containerd[1571]: time="2025-07-15T05:21:03.591984434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 3.709593847s" Jul 15 05:21:03.592114 containerd[1571]: time="2025-07-15T05:21:03.592015852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 05:21:03.608282 containerd[1571]: time="2025-07-15T05:21:03.608240082Z" level=info msg="CreateContainer within sandbox \"e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 05:21:03.621769 containerd[1571]: time="2025-07-15T05:21:03.621722717Z" level=info msg="Container 
1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:03.626375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012224685.mount: Deactivated successfully. Jul 15 05:21:03.635684 containerd[1571]: time="2025-07-15T05:21:03.635631162Z" level=info msg="CreateContainer within sandbox \"e0a8b835ef569abb3a69a5e0219c5062a2e95c48ced98fef2eabb3bfd4493b07\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\"" Jul 15 05:21:03.636357 containerd[1571]: time="2025-07-15T05:21:03.636219000Z" level=info msg="StartContainer for \"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\"" Jul 15 05:21:03.637594 containerd[1571]: time="2025-07-15T05:21:03.637573603Z" level=info msg="connecting to shim 1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851" address="unix:///run/containerd/s/e60642e2e387601a0e58320f6dc6a296f44f3534cda5a6e3d7aebfb0012a8ac8" protocol=ttrpc version=3 Jul 15 05:21:03.679661 systemd[1]: Started cri-containerd-1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851.scope - libcontainer container 1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851. Jul 15 05:21:03.729798 containerd[1571]: time="2025-07-15T05:21:03.729757997Z" level=info msg="StartContainer for \"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\" returns successfully" Jul 15 05:21:03.817649 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 05:21:03.817748 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 15 05:21:03.944390 kubelet[2738]: I0715 05:21:03.944072 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jl9c8" podStartSLOduration=1.501990248 podStartE2EDuration="10.944052534s" podCreationTimestamp="2025-07-15 05:20:53 +0000 UTC" firstStartedPulling="2025-07-15 05:20:54.150864231 +0000 UTC m=+18.495844827" lastFinishedPulling="2025-07-15 05:21:03.592926517 +0000 UTC m=+27.937907113" observedRunningTime="2025-07-15 05:21:03.943479075 +0000 UTC m=+28.288459671" watchObservedRunningTime="2025-07-15 05:21:03.944052534 +0000 UTC m=+28.289033130" Jul 15 05:21:04.029776 kubelet[2738]: I0715 05:21:04.029039 2738 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfjkh\" (UniqueName: \"kubernetes.io/projected/fcda4639-02b0-430f-9884-a07dd33b2a1a-kube-api-access-mfjkh\") pod \"fcda4639-02b0-430f-9884-a07dd33b2a1a\" (UID: \"fcda4639-02b0-430f-9884-a07dd33b2a1a\") " Jul 15 05:21:04.029776 kubelet[2738]: I0715 05:21:04.029099 2738 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcda4639-02b0-430f-9884-a07dd33b2a1a-whisker-backend-key-pair\") pod \"fcda4639-02b0-430f-9884-a07dd33b2a1a\" (UID: \"fcda4639-02b0-430f-9884-a07dd33b2a1a\") " Jul 15 05:21:04.029776 kubelet[2738]: I0715 05:21:04.029120 2738 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcda4639-02b0-430f-9884-a07dd33b2a1a-whisker-ca-bundle\") pod \"fcda4639-02b0-430f-9884-a07dd33b2a1a\" (UID: \"fcda4639-02b0-430f-9884-a07dd33b2a1a\") " Jul 15 05:21:04.036110 kubelet[2738]: I0715 05:21:04.035489 2738 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcda4639-02b0-430f-9884-a07dd33b2a1a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"fcda4639-02b0-430f-9884-a07dd33b2a1a" (UID: "fcda4639-02b0-430f-9884-a07dd33b2a1a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 05:21:04.041002 kubelet[2738]: I0715 05:21:04.040945 2738 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcda4639-02b0-430f-9884-a07dd33b2a1a-kube-api-access-mfjkh" (OuterVolumeSpecName: "kube-api-access-mfjkh") pod "fcda4639-02b0-430f-9884-a07dd33b2a1a" (UID: "fcda4639-02b0-430f-9884-a07dd33b2a1a"). InnerVolumeSpecName "kube-api-access-mfjkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 05:21:04.041532 kubelet[2738]: I0715 05:21:04.041481 2738 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcda4639-02b0-430f-9884-a07dd33b2a1a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fcda4639-02b0-430f-9884-a07dd33b2a1a" (UID: "fcda4639-02b0-430f-9884-a07dd33b2a1a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 05:21:04.060544 containerd[1571]: time="2025-07-15T05:21:04.059457992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\" id:\"d9787a7f1e5f6c0f45e7b0ad110301e35e784abd4dfa2b0dc59f1a8ec782fa94\" pid:3810 exit_status:1 exited_at:{seconds:1752556864 nanos:58646596}" Jul 15 05:21:04.130123 kubelet[2738]: I0715 05:21:04.130079 2738 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcda4639-02b0-430f-9884-a07dd33b2a1a-whisker-backend-key-pair\") on node \"172-237-133-19\" DevicePath \"\"" Jul 15 05:21:04.130123 kubelet[2738]: I0715 05:21:04.130112 2738 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcda4639-02b0-430f-9884-a07dd33b2a1a-whisker-ca-bundle\") on node \"172-237-133-19\" DevicePath \"\"" Jul 15 05:21:04.130123 kubelet[2738]: I0715 05:21:04.130122 2738 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfjkh\" (UniqueName: \"kubernetes.io/projected/fcda4639-02b0-430f-9884-a07dd33b2a1a-kube-api-access-mfjkh\") on node \"172-237-133-19\" DevicePath \"\"" Jul 15 05:21:04.560206 systemd[1]: var-lib-kubelet-pods-fcda4639\x2d02b0\x2d430f\x2d9884\x2da07dd33b2a1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmfjkh.mount: Deactivated successfully. Jul 15 05:21:04.560316 systemd[1]: var-lib-kubelet-pods-fcda4639\x2d02b0\x2d430f\x2d9884\x2da07dd33b2a1a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 05:21:04.908356 systemd[1]: Removed slice kubepods-besteffort-podfcda4639_02b0_430f_9884_a07dd33b2a1a.slice - libcontainer container kubepods-besteffort-podfcda4639_02b0_430f_9884_a07dd33b2a1a.slice. 
Jul 15 05:21:04.965042 systemd[1]: Created slice kubepods-besteffort-podcfc81e00_8a46_49bd_8744_c79354e2f6ae.slice - libcontainer container kubepods-besteffort-podcfc81e00_8a46_49bd_8744_c79354e2f6ae.slice. Jul 15 05:21:04.998278 containerd[1571]: time="2025-07-15T05:21:04.998240880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\" id:\"399a34969ac835cba11a403e837aeb417d3e29579eb4704e4f8f27877e79e649\" pid:3846 exit_status:1 exited_at:{seconds:1752556864 nanos:997871955}" Jul 15 05:21:05.135845 kubelet[2738]: I0715 05:21:05.135792 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cfc81e00-8a46-49bd-8744-c79354e2f6ae-whisker-backend-key-pair\") pod \"whisker-56d548bbf8-mbbjm\" (UID: \"cfc81e00-8a46-49bd-8744-c79354e2f6ae\") " pod="calico-system/whisker-56d548bbf8-mbbjm" Jul 15 05:21:05.135845 kubelet[2738]: I0715 05:21:05.135834 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8p5r\" (UniqueName: \"kubernetes.io/projected/cfc81e00-8a46-49bd-8744-c79354e2f6ae-kube-api-access-l8p5r\") pod \"whisker-56d548bbf8-mbbjm\" (UID: \"cfc81e00-8a46-49bd-8744-c79354e2f6ae\") " pod="calico-system/whisker-56d548bbf8-mbbjm" Jul 15 05:21:05.136301 kubelet[2738]: I0715 05:21:05.135856 2738 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfc81e00-8a46-49bd-8744-c79354e2f6ae-whisker-ca-bundle\") pod \"whisker-56d548bbf8-mbbjm\" (UID: \"cfc81e00-8a46-49bd-8744-c79354e2f6ae\") " pod="calico-system/whisker-56d548bbf8-mbbjm" Jul 15 05:21:05.271733 containerd[1571]: time="2025-07-15T05:21:05.271689181Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-56d548bbf8-mbbjm,Uid:cfc81e00-8a46-49bd-8744-c79354e2f6ae,Namespace:calico-system,Attempt:0,}" Jul 15 05:21:05.472675 systemd-networkd[1452]: cali64c44681423: Link UP Jul 15 05:21:05.475152 systemd-networkd[1452]: cali64c44681423: Gained carrier Jul 15 05:21:05.503828 containerd[1571]: 2025-07-15 05:21:05.326 [INFO][3919] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:21:05.503828 containerd[1571]: 2025-07-15 05:21:05.364 [INFO][3919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0 whisker-56d548bbf8- calico-system cfc81e00-8a46-49bd-8744-c79354e2f6ae 907 0 2025-07-15 05:21:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56d548bbf8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-237-133-19 whisker-56d548bbf8-mbbjm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali64c44681423 [] [] }} ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Namespace="calico-system" Pod="whisker-56d548bbf8-mbbjm" WorkloadEndpoint="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-" Jul 15 05:21:05.503828 containerd[1571]: 2025-07-15 05:21:05.364 [INFO][3919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Namespace="calico-system" Pod="whisker-56d548bbf8-mbbjm" WorkloadEndpoint="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" Jul 15 05:21:05.503828 containerd[1571]: 2025-07-15 05:21:05.411 [INFO][3958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" HandleID="k8s-pod-network.c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" 
Workload="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.412 [INFO][3958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" HandleID="k8s-pod-network.c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Workload="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-133-19", "pod":"whisker-56d548bbf8-mbbjm", "timestamp":"2025-07-15 05:21:05.41124238 +0000 UTC"}, Hostname:"172-237-133-19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.412 [INFO][3958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.412 [INFO][3958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.412 [INFO][3958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-19' Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.421 [INFO][3958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" host="172-237-133-19" Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.426 [INFO][3958] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-19" Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.431 [INFO][3958] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.433 [INFO][3958] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.436 [INFO][3958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:05.504038 containerd[1571]: 2025-07-15 05:21:05.436 [INFO][3958] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" host="172-237-133-19" Jul 15 05:21:05.504244 containerd[1571]: 2025-07-15 05:21:05.437 [INFO][3958] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5 Jul 15 05:21:05.504244 containerd[1571]: 2025-07-15 05:21:05.442 [INFO][3958] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" host="172-237-133-19" Jul 15 05:21:05.504244 containerd[1571]: 2025-07-15 05:21:05.446 [INFO][3958] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.129/26] block=192.168.21.128/26 
handle="k8s-pod-network.c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" host="172-237-133-19" Jul 15 05:21:05.504244 containerd[1571]: 2025-07-15 05:21:05.447 [INFO][3958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.129/26] handle="k8s-pod-network.c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" host="172-237-133-19" Jul 15 05:21:05.504244 containerd[1571]: 2025-07-15 05:21:05.447 [INFO][3958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:21:05.504244 containerd[1571]: 2025-07-15 05:21:05.447 [INFO][3958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.129/26] IPv6=[] ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" HandleID="k8s-pod-network.c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Workload="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" Jul 15 05:21:05.504361 containerd[1571]: 2025-07-15 05:21:05.455 [INFO][3919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Namespace="calico-system" Pod="whisker-56d548bbf8-mbbjm" WorkloadEndpoint="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0", GenerateName:"whisker-56d548bbf8-", Namespace:"calico-system", SelfLink:"", UID:"cfc81e00-8a46-49bd-8744-c79354e2f6ae", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56d548bbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"", Pod:"whisker-56d548bbf8-mbbjm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali64c44681423", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:05.504361 containerd[1571]: 2025-07-15 05:21:05.455 [INFO][3919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.129/32] ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Namespace="calico-system" Pod="whisker-56d548bbf8-mbbjm" WorkloadEndpoint="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" Jul 15 05:21:05.504430 containerd[1571]: 2025-07-15 05:21:05.455 [INFO][3919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64c44681423 ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Namespace="calico-system" Pod="whisker-56d548bbf8-mbbjm" WorkloadEndpoint="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" Jul 15 05:21:05.504430 containerd[1571]: 2025-07-15 05:21:05.477 [INFO][3919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Namespace="calico-system" Pod="whisker-56d548bbf8-mbbjm" WorkloadEndpoint="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" Jul 15 05:21:05.504471 containerd[1571]: 2025-07-15 05:21:05.477 [INFO][3919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Namespace="calico-system" 
Pod="whisker-56d548bbf8-mbbjm" WorkloadEndpoint="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0", GenerateName:"whisker-56d548bbf8-", Namespace:"calico-system", SelfLink:"", UID:"cfc81e00-8a46-49bd-8744-c79354e2f6ae", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56d548bbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5", Pod:"whisker-56d548bbf8-mbbjm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali64c44681423", MAC:"a6:f5:ce:11:ef:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:05.506361 containerd[1571]: 2025-07-15 05:21:05.491 [INFO][3919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" Namespace="calico-system" Pod="whisker-56d548bbf8-mbbjm" WorkloadEndpoint="172--237--133--19-k8s-whisker--56d548bbf8--mbbjm-eth0" Jul 15 05:21:05.556730 containerd[1571]: 
time="2025-07-15T05:21:05.556580287Z" level=info msg="connecting to shim c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5" address="unix:///run/containerd/s/f72a0653a97270e496cfce430f1112fa5d22d0f9a7b019300405156df00c4880" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:05.609989 systemd[1]: Started cri-containerd-c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5.scope - libcontainer container c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5. Jul 15 05:21:05.667276 containerd[1571]: time="2025-07-15T05:21:05.667244358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d548bbf8-mbbjm,Uid:cfc81e00-8a46-49bd-8744-c79354e2f6ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5\"" Jul 15 05:21:05.669554 containerd[1571]: time="2025-07-15T05:21:05.669295980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 05:21:05.754922 kubelet[2738]: I0715 05:21:05.754846 2738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcda4639-02b0-430f-9884-a07dd33b2a1a" path="/var/lib/kubelet/pods/fcda4639-02b0-430f-9884-a07dd33b2a1a/volumes" Jul 15 05:21:06.732165 containerd[1571]: time="2025-07-15T05:21:06.732074643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:06.733422 containerd[1571]: time="2025-07-15T05:21:06.733397326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 05:21:06.734233 containerd[1571]: time="2025-07-15T05:21:06.734111874Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:06.736819 containerd[1571]: time="2025-07-15T05:21:06.736791566Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:06.737934 containerd[1571]: time="2025-07-15T05:21:06.737679715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.068113481s" Jul 15 05:21:06.737934 containerd[1571]: time="2025-07-15T05:21:06.737705443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 05:21:06.742056 containerd[1571]: time="2025-07-15T05:21:06.742039359Z" level=info msg="CreateContainer within sandbox \"c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 05:21:06.753691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2246704410.mount: Deactivated successfully. 
Jul 15 05:21:06.753918 containerd[1571]: time="2025-07-15T05:21:06.753705294Z" level=info msg="Container ce3a52b8176863c0dfd17e137b6b47bf4cd6053e604a9cde5b9a2ed16b763db6: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:06.763005 containerd[1571]: time="2025-07-15T05:21:06.762413173Z" level=info msg="CreateContainer within sandbox \"c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ce3a52b8176863c0dfd17e137b6b47bf4cd6053e604a9cde5b9a2ed16b763db6\"" Jul 15 05:21:06.763599 containerd[1571]: time="2025-07-15T05:21:06.763454712Z" level=info msg="StartContainer for \"ce3a52b8176863c0dfd17e137b6b47bf4cd6053e604a9cde5b9a2ed16b763db6\"" Jul 15 05:21:06.765849 containerd[1571]: time="2025-07-15T05:21:06.765687511Z" level=info msg="connecting to shim ce3a52b8176863c0dfd17e137b6b47bf4cd6053e604a9cde5b9a2ed16b763db6" address="unix:///run/containerd/s/f72a0653a97270e496cfce430f1112fa5d22d0f9a7b019300405156df00c4880" protocol=ttrpc version=3 Jul 15 05:21:06.798687 systemd[1]: Started cri-containerd-ce3a52b8176863c0dfd17e137b6b47bf4cd6053e604a9cde5b9a2ed16b763db6.scope - libcontainer container ce3a52b8176863c0dfd17e137b6b47bf4cd6053e604a9cde5b9a2ed16b763db6. Jul 15 05:21:06.852623 containerd[1571]: time="2025-07-15T05:21:06.852561622Z" level=info msg="StartContainer for \"ce3a52b8176863c0dfd17e137b6b47bf4cd6053e604a9cde5b9a2ed16b763db6\" returns successfully" Jul 15 05:21:06.855831 containerd[1571]: time="2025-07-15T05:21:06.855774414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 05:21:07.164723 systemd-networkd[1452]: cali64c44681423: Gained IPv6LL Jul 15 05:21:08.630653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount984297349.mount: Deactivated successfully. 
Jul 15 05:21:08.642538 containerd[1571]: time="2025-07-15T05:21:08.642482077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:08.643387 containerd[1571]: time="2025-07-15T05:21:08.643171262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 05:21:08.644080 containerd[1571]: time="2025-07-15T05:21:08.644050056Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:08.645974 containerd[1571]: time="2025-07-15T05:21:08.645942029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:08.646641 containerd[1571]: time="2025-07-15T05:21:08.646613645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.790787474s" Jul 15 05:21:08.646723 containerd[1571]: time="2025-07-15T05:21:08.646707110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 05:21:08.649229 containerd[1571]: time="2025-07-15T05:21:08.649200212Z" level=info msg="CreateContainer within sandbox \"c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 05:21:08.658601 
containerd[1571]: time="2025-07-15T05:21:08.656765053Z" level=info msg="Container 88d40051274c9716009aeaf6c4eca315f808adeb14744a2d233f5506791d085a: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:08.660958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866882967.mount: Deactivated successfully. Jul 15 05:21:08.669846 containerd[1571]: time="2025-07-15T05:21:08.669808643Z" level=info msg="CreateContainer within sandbox \"c9de04b5de8535dd930fe2505f6772c20fd929b3d9f01d48d401c3548a2792c5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"88d40051274c9716009aeaf6c4eca315f808adeb14744a2d233f5506791d085a\"" Jul 15 05:21:08.671823 containerd[1571]: time="2025-07-15T05:21:08.671111506Z" level=info msg="StartContainer for \"88d40051274c9716009aeaf6c4eca315f808adeb14744a2d233f5506791d085a\"" Jul 15 05:21:08.672292 containerd[1571]: time="2025-07-15T05:21:08.672272766Z" level=info msg="connecting to shim 88d40051274c9716009aeaf6c4eca315f808adeb14744a2d233f5506791d085a" address="unix:///run/containerd/s/f72a0653a97270e496cfce430f1112fa5d22d0f9a7b019300405156df00c4880" protocol=ttrpc version=3 Jul 15 05:21:08.693762 systemd[1]: Started cri-containerd-88d40051274c9716009aeaf6c4eca315f808adeb14744a2d233f5506791d085a.scope - libcontainer container 88d40051274c9716009aeaf6c4eca315f808adeb14744a2d233f5506791d085a. 
Jul 15 05:21:08.745823 containerd[1571]: time="2025-07-15T05:21:08.745784388Z" level=info msg="StartContainer for \"88d40051274c9716009aeaf6c4eca315f808adeb14744a2d233f5506791d085a\" returns successfully" Jul 15 05:21:10.750257 containerd[1571]: time="2025-07-15T05:21:10.750190335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5cd7bbcf-8pzcc,Uid:844319f9-6aba-429e-a787-913ecb604968,Namespace:calico-system,Attempt:0,}" Jul 15 05:21:10.841104 systemd-networkd[1452]: calicb79c8bfc5e: Link UP Jul 15 05:21:10.841400 systemd-networkd[1452]: calicb79c8bfc5e: Gained carrier Jul 15 05:21:10.852361 kubelet[2738]: I0715 05:21:10.851151 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-56d548bbf8-mbbjm" podStartSLOduration=3.87237182 podStartE2EDuration="6.851131014s" podCreationTimestamp="2025-07-15 05:21:04 +0000 UTC" firstStartedPulling="2025-07-15 05:21:05.668698307 +0000 UTC m=+30.013678903" lastFinishedPulling="2025-07-15 05:21:08.647457501 +0000 UTC m=+32.992438097" observedRunningTime="2025-07-15 05:21:08.936608619 +0000 UTC m=+33.281589215" watchObservedRunningTime="2025-07-15 05:21:10.851131014 +0000 UTC m=+35.196111610" Jul 15 05:21:10.856810 containerd[1571]: 2025-07-15 05:21:10.777 [INFO][4194] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:21:10.856810 containerd[1571]: 2025-07-15 05:21:10.785 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0 calico-kube-controllers-5d5cd7bbcf- calico-system 844319f9-6aba-429e-a787-913ecb604968 834 0 2025-07-15 05:20:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d5cd7bbcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] 
map[] [] [] []} {k8s 172-237-133-19 calico-kube-controllers-5d5cd7bbcf-8pzcc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicb79c8bfc5e [] [] }} ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Namespace="calico-system" Pod="calico-kube-controllers-5d5cd7bbcf-8pzcc" WorkloadEndpoint="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-" Jul 15 05:21:10.856810 containerd[1571]: 2025-07-15 05:21:10.786 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Namespace="calico-system" Pod="calico-kube-controllers-5d5cd7bbcf-8pzcc" WorkloadEndpoint="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" Jul 15 05:21:10.856810 containerd[1571]: 2025-07-15 05:21:10.808 [INFO][4207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" HandleID="k8s-pod-network.91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Workload="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" Jul 15 05:21:10.856973 containerd[1571]: 2025-07-15 05:21:10.808 [INFO][4207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" HandleID="k8s-pod-network.91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Workload="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-133-19", "pod":"calico-kube-controllers-5d5cd7bbcf-8pzcc", "timestamp":"2025-07-15 05:21:10.808235403 +0000 UTC"}, Hostname:"172-237-133-19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:21:10.856973 containerd[1571]: 2025-07-15 05:21:10.808 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:21:10.856973 containerd[1571]: 2025-07-15 05:21:10.808 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:21:10.856973 containerd[1571]: 2025-07-15 05:21:10.808 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-19' Jul 15 05:21:10.856973 containerd[1571]: 2025-07-15 05:21:10.814 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" host="172-237-133-19" Jul 15 05:21:10.856973 containerd[1571]: 2025-07-15 05:21:10.817 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-19" Jul 15 05:21:10.856973 containerd[1571]: 2025-07-15 05:21:10.820 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:10.856973 containerd[1571]: 2025-07-15 05:21:10.822 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:10.856973 containerd[1571]: 2025-07-15 05:21:10.823 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:10.857151 containerd[1571]: 2025-07-15 05:21:10.823 [INFO][4207] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" host="172-237-133-19" Jul 15 05:21:10.857151 containerd[1571]: 2025-07-15 05:21:10.825 [INFO][4207] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7 Jul 15 05:21:10.857151 containerd[1571]: 
2025-07-15 05:21:10.829 [INFO][4207] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" host="172-237-133-19" Jul 15 05:21:10.857151 containerd[1571]: 2025-07-15 05:21:10.834 [INFO][4207] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.130/26] block=192.168.21.128/26 handle="k8s-pod-network.91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" host="172-237-133-19" Jul 15 05:21:10.857151 containerd[1571]: 2025-07-15 05:21:10.834 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.130/26] handle="k8s-pod-network.91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" host="172-237-133-19" Jul 15 05:21:10.857151 containerd[1571]: 2025-07-15 05:21:10.834 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:21:10.857151 containerd[1571]: 2025-07-15 05:21:10.834 [INFO][4207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.130/26] IPv6=[] ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" HandleID="k8s-pod-network.91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Workload="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" Jul 15 05:21:10.857304 containerd[1571]: 2025-07-15 05:21:10.836 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Namespace="calico-system" Pod="calico-kube-controllers-5d5cd7bbcf-8pzcc" WorkloadEndpoint="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0", GenerateName:"calico-kube-controllers-5d5cd7bbcf-", Namespace:"calico-system", SelfLink:"", 
UID:"844319f9-6aba-429e-a787-913ecb604968", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5cd7bbcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"", Pod:"calico-kube-controllers-5d5cd7bbcf-8pzcc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb79c8bfc5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:10.857356 containerd[1571]: 2025-07-15 05:21:10.836 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.130/32] ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Namespace="calico-system" Pod="calico-kube-controllers-5d5cd7bbcf-8pzcc" WorkloadEndpoint="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" Jul 15 05:21:10.857356 containerd[1571]: 2025-07-15 05:21:10.836 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb79c8bfc5e ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Namespace="calico-system" Pod="calico-kube-controllers-5d5cd7bbcf-8pzcc" 
WorkloadEndpoint="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" Jul 15 05:21:10.857356 containerd[1571]: 2025-07-15 05:21:10.843 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Namespace="calico-system" Pod="calico-kube-controllers-5d5cd7bbcf-8pzcc" WorkloadEndpoint="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" Jul 15 05:21:10.857421 containerd[1571]: 2025-07-15 05:21:10.843 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Namespace="calico-system" Pod="calico-kube-controllers-5d5cd7bbcf-8pzcc" WorkloadEndpoint="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0", GenerateName:"calico-kube-controllers-5d5cd7bbcf-", Namespace:"calico-system", SelfLink:"", UID:"844319f9-6aba-429e-a787-913ecb604968", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d5cd7bbcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", 
ContainerID:"91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7", Pod:"calico-kube-controllers-5d5cd7bbcf-8pzcc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb79c8bfc5e", MAC:"12:4f:0d:d8:53:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:10.857468 containerd[1571]: 2025-07-15 05:21:10.850 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" Namespace="calico-system" Pod="calico-kube-controllers-5d5cd7bbcf-8pzcc" WorkloadEndpoint="172--237--133--19-k8s-calico--kube--controllers--5d5cd7bbcf--8pzcc-eth0" Jul 15 05:21:10.879332 containerd[1571]: time="2025-07-15T05:21:10.878948913Z" level=info msg="connecting to shim 91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7" address="unix:///run/containerd/s/bba48f875ac58e8735ee1c8ad3ea84717e85eb92a929838c874f5e5a3b6b63aa" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:10.904628 systemd[1]: Started cri-containerd-91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7.scope - libcontainer container 91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7. 
Jul 15 05:21:10.954326 containerd[1571]: time="2025-07-15T05:21:10.954300266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d5cd7bbcf-8pzcc,Uid:844319f9-6aba-429e-a787-913ecb604968,Namespace:calico-system,Attempt:0,} returns sandbox id \"91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7\"" Jul 15 05:21:10.956776 containerd[1571]: time="2025-07-15T05:21:10.956704034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 05:21:12.551280 containerd[1571]: time="2025-07-15T05:21:12.551220629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:12.552143 containerd[1571]: time="2025-07-15T05:21:12.551941799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 05:21:12.552694 containerd[1571]: time="2025-07-15T05:21:12.552656719Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:12.554349 containerd[1571]: time="2025-07-15T05:21:12.554316672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:12.554841 containerd[1571]: time="2025-07-15T05:21:12.554809108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 1.598077895s" Jul 15 05:21:12.554915 
containerd[1571]: time="2025-07-15T05:21:12.554900645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 05:21:12.572175 containerd[1571]: time="2025-07-15T05:21:12.572143371Z" level=info msg="CreateContainer within sandbox \"91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 05:21:12.579805 containerd[1571]: time="2025-07-15T05:21:12.579774567Z" level=info msg="Container 521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:12.583283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791895156.mount: Deactivated successfully. Jul 15 05:21:12.598480 containerd[1571]: time="2025-07-15T05:21:12.598437992Z" level=info msg="CreateContainer within sandbox \"91055de71870c66ebd75014df0a50e794b6509dc52baab9765e69f0be6e1fda7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\"" Jul 15 05:21:12.599935 containerd[1571]: time="2025-07-15T05:21:12.599018045Z" level=info msg="StartContainer for \"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\"" Jul 15 05:21:12.600150 containerd[1571]: time="2025-07-15T05:21:12.600075975Z" level=info msg="connecting to shim 521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47" address="unix:///run/containerd/s/bba48f875ac58e8735ee1c8ad3ea84717e85eb92a929838c874f5e5a3b6b63aa" protocol=ttrpc version=3 Jul 15 05:21:12.625632 systemd[1]: Started cri-containerd-521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47.scope - libcontainer container 521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47. 
Jul 15 05:21:12.669131 systemd-networkd[1452]: calicb79c8bfc5e: Gained IPv6LL Jul 15 05:21:12.679827 containerd[1571]: time="2025-07-15T05:21:12.679401245Z" level=info msg="StartContainer for \"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\" returns successfully" Jul 15 05:21:12.944949 kubelet[2738]: I0715 05:21:12.943690 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d5cd7bbcf-8pzcc" podStartSLOduration=17.343601457 podStartE2EDuration="18.943675703s" podCreationTimestamp="2025-07-15 05:20:54 +0000 UTC" firstStartedPulling="2025-07-15 05:21:10.955670196 +0000 UTC m=+35.300650792" lastFinishedPulling="2025-07-15 05:21:12.555744442 +0000 UTC m=+36.900725038" observedRunningTime="2025-07-15 05:21:12.943588686 +0000 UTC m=+37.288569292" watchObservedRunningTime="2025-07-15 05:21:12.943675703 +0000 UTC m=+37.288656309" Jul 15 05:21:13.749942 kubelet[2738]: E0715 05:21:13.749536 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:13.750441 containerd[1571]: time="2025-07-15T05:21:13.750403892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j2s8z,Uid:0741ab0b-0f47-4e25-bc74-000420332dfe,Namespace:kube-system,Attempt:0,}" Jul 15 05:21:13.753098 containerd[1571]: time="2025-07-15T05:21:13.752947352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjjqr,Uid:2906217e-d60f-46de-8e0a-40e519ff8ae1,Namespace:calico-system,Attempt:0,}" Jul 15 05:21:13.875403 systemd-networkd[1452]: calia49700f6bf8: Link UP Jul 15 05:21:13.875940 systemd-networkd[1452]: calia49700f6bf8: Gained carrier Jul 15 05:21:13.893573 containerd[1571]: 2025-07-15 05:21:13.792 [INFO][4375] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:21:13.893573 containerd[1571]: 2025-07-15 
05:21:13.804 [INFO][4375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0 coredns-7c65d6cfc9- kube-system 0741ab0b-0f47-4e25-bc74-000420332dfe 844 0 2025-07-15 05:20:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-133-19 coredns-7c65d6cfc9-j2s8z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia49700f6bf8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j2s8z" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-" Jul 15 05:21:13.893573 containerd[1571]: 2025-07-15 05:21:13.804 [INFO][4375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j2s8z" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" Jul 15 05:21:13.893573 containerd[1571]: 2025-07-15 05:21:13.835 [INFO][4400] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" HandleID="k8s-pod-network.6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Workload="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.836 [INFO][4400] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" HandleID="k8s-pod-network.6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Workload="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024f8f0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-133-19", "pod":"coredns-7c65d6cfc9-j2s8z", "timestamp":"2025-07-15 05:21:13.835727182 +0000 UTC"}, Hostname:"172-237-133-19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.836 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.836 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.836 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-19' Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.843 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" host="172-237-133-19" Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.848 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-19" Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.853 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.854 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.856 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:13.893762 containerd[1571]: 2025-07-15 05:21:13.856 [INFO][4400] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 
handle="k8s-pod-network.6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" host="172-237-133-19" Jul 15 05:21:13.893969 containerd[1571]: 2025-07-15 05:21:13.858 [INFO][4400] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4 Jul 15 05:21:13.893969 containerd[1571]: 2025-07-15 05:21:13.862 [INFO][4400] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" host="172-237-133-19" Jul 15 05:21:13.893969 containerd[1571]: 2025-07-15 05:21:13.866 [INFO][4400] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.131/26] block=192.168.21.128/26 handle="k8s-pod-network.6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" host="172-237-133-19" Jul 15 05:21:13.893969 containerd[1571]: 2025-07-15 05:21:13.866 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.131/26] handle="k8s-pod-network.6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" host="172-237-133-19" Jul 15 05:21:13.893969 containerd[1571]: 2025-07-15 05:21:13.866 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:21:13.893969 containerd[1571]: 2025-07-15 05:21:13.866 [INFO][4400] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.131/26] IPv6=[] ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" HandleID="k8s-pod-network.6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Workload="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" Jul 15 05:21:13.894086 containerd[1571]: 2025-07-15 05:21:13.869 [INFO][4375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j2s8z" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0741ab0b-0f47-4e25-bc74-000420332dfe", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"", Pod:"coredns-7c65d6cfc9-j2s8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia49700f6bf8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:13.894086 containerd[1571]: 2025-07-15 05:21:13.870 [INFO][4375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.131/32] ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j2s8z" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" Jul 15 05:21:13.894086 containerd[1571]: 2025-07-15 05:21:13.870 [INFO][4375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia49700f6bf8 ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j2s8z" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" Jul 15 05:21:13.894086 containerd[1571]: 2025-07-15 05:21:13.875 [INFO][4375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j2s8z" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" Jul 15 05:21:13.894086 containerd[1571]: 2025-07-15 05:21:13.876 [INFO][4375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j2s8z" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0741ab0b-0f47-4e25-bc74-000420332dfe", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4", Pod:"coredns-7c65d6cfc9-j2s8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia49700f6bf8", MAC:"d2:35:ac:80:5e:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:13.894086 containerd[1571]: 2025-07-15 05:21:13.889 [INFO][4375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j2s8z" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--j2s8z-eth0" Jul 15 05:21:13.925320 containerd[1571]: time="2025-07-15T05:21:13.924456441Z" level=info msg="connecting to shim 6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4" address="unix:///run/containerd/s/aa5df36611d6488ee2a6ae09fa0349f7a8e8fbb2880d64a4ad18d3d7007510fb" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:13.936009 kubelet[2738]: I0715 05:21:13.935783 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:21:13.975655 systemd[1]: Started cri-containerd-6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4.scope - libcontainer container 6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4. Jul 15 05:21:13.993623 systemd-networkd[1452]: cali62c342fdb31: Link UP Jul 15 05:21:13.995241 systemd-networkd[1452]: cali62c342fdb31: Gained carrier Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.795 [INFO][4376] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.808 [INFO][4376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--19-k8s-csi--node--driver--zjjqr-eth0 csi-node-driver- calico-system 2906217e-d60f-46de-8e0a-40e519ff8ae1 748 0 2025-07-15 05:20:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-133-19 csi-node-driver-zjjqr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali62c342fdb31 [] [] }} 
ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Namespace="calico-system" Pod="csi-node-driver-zjjqr" WorkloadEndpoint="172--237--133--19-k8s-csi--node--driver--zjjqr-" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.808 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Namespace="calico-system" Pod="csi-node-driver-zjjqr" WorkloadEndpoint="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.844 [INFO][4405] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" HandleID="k8s-pod-network.0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Workload="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.845 [INFO][4405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" HandleID="k8s-pod-network.0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Workload="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5930), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-133-19", "pod":"csi-node-driver-zjjqr", "timestamp":"2025-07-15 05:21:13.844875693 +0000 UTC"}, Hostname:"172-237-133-19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.845 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.866 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.866 [INFO][4405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-19' Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.945 [INFO][4405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" host="172-237-133-19" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.954 [INFO][4405] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-19" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.959 [INFO][4405] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.963 [INFO][4405] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.969 [INFO][4405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.969 [INFO][4405] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" host="172-237-133-19" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.970 [INFO][4405] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.978 [INFO][4405] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" host="172-237-133-19" Jul 15 05:21:14.015872 containerd[1571]: 
2025-07-15 05:21:13.983 [INFO][4405] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.132/26] block=192.168.21.128/26 handle="k8s-pod-network.0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" host="172-237-133-19" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.983 [INFO][4405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.132/26] handle="k8s-pod-network.0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" host="172-237-133-19" Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.983 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:21:14.015872 containerd[1571]: 2025-07-15 05:21:13.983 [INFO][4405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.132/26] IPv6=[] ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" HandleID="k8s-pod-network.0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Workload="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" Jul 15 05:21:14.016359 containerd[1571]: 2025-07-15 05:21:13.988 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Namespace="calico-system" Pod="csi-node-driver-zjjqr" WorkloadEndpoint="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-csi--node--driver--zjjqr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2906217e-d60f-46de-8e0a-40e519ff8ae1", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"", Pod:"csi-node-driver-zjjqr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali62c342fdb31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:14.016359 containerd[1571]: 2025-07-15 05:21:13.988 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.132/32] ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Namespace="calico-system" Pod="csi-node-driver-zjjqr" WorkloadEndpoint="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" Jul 15 05:21:14.016359 containerd[1571]: 2025-07-15 05:21:13.988 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62c342fdb31 ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Namespace="calico-system" Pod="csi-node-driver-zjjqr" WorkloadEndpoint="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" Jul 15 05:21:14.016359 containerd[1571]: 2025-07-15 05:21:13.994 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Namespace="calico-system" Pod="csi-node-driver-zjjqr" WorkloadEndpoint="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" Jul 15 05:21:14.016359 containerd[1571]: 2025-07-15 05:21:13.996 
[INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Namespace="calico-system" Pod="csi-node-driver-zjjqr" WorkloadEndpoint="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-csi--node--driver--zjjqr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2906217e-d60f-46de-8e0a-40e519ff8ae1", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd", Pod:"csi-node-driver-zjjqr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali62c342fdb31", MAC:"92:f4:82:d4:1c:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:14.016359 containerd[1571]: 2025-07-15 05:21:14.012 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" Namespace="calico-system" Pod="csi-node-driver-zjjqr" WorkloadEndpoint="172--237--133--19-k8s-csi--node--driver--zjjqr-eth0" Jul 15 05:21:14.041943 containerd[1571]: time="2025-07-15T05:21:14.041899067Z" level=info msg="connecting to shim 0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd" address="unix:///run/containerd/s/7ee4a413951c09a5c605cf1832b7dfaa48caefce1fcffc45156c9c6106a2e148" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:14.057521 containerd[1571]: time="2025-07-15T05:21:14.057473034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j2s8z,Uid:0741ab0b-0f47-4e25-bc74-000420332dfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4\"" Jul 15 05:21:14.058158 kubelet[2738]: E0715 05:21:14.058127 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:14.064206 containerd[1571]: time="2025-07-15T05:21:14.064172386Z" level=info msg="CreateContainer within sandbox \"6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:21:14.073778 systemd[1]: Started cri-containerd-0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd.scope - libcontainer container 0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd. 
Jul 15 05:21:14.074296 containerd[1571]: time="2025-07-15T05:21:14.074166641Z" level=info msg="Container 2725c55618f7623dba43274e85f58e0d7b01c5e93171b6a26e5bbe65d0c246ba: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:14.079382 containerd[1571]: time="2025-07-15T05:21:14.079264096Z" level=info msg="CreateContainer within sandbox \"6c4611f4e061ec1e29fece2244b82d679fb0d373dd7282a13dccd402a78b98f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2725c55618f7623dba43274e85f58e0d7b01c5e93171b6a26e5bbe65d0c246ba\"" Jul 15 05:21:14.080527 containerd[1571]: time="2025-07-15T05:21:14.079853231Z" level=info msg="StartContainer for \"2725c55618f7623dba43274e85f58e0d7b01c5e93171b6a26e5bbe65d0c246ba\"" Jul 15 05:21:14.080527 containerd[1571]: time="2025-07-15T05:21:14.080474524Z" level=info msg="connecting to shim 2725c55618f7623dba43274e85f58e0d7b01c5e93171b6a26e5bbe65d0c246ba" address="unix:///run/containerd/s/aa5df36611d6488ee2a6ae09fa0349f7a8e8fbb2880d64a4ad18d3d7007510fb" protocol=ttrpc version=3 Jul 15 05:21:14.110719 systemd[1]: Started cri-containerd-2725c55618f7623dba43274e85f58e0d7b01c5e93171b6a26e5bbe65d0c246ba.scope - libcontainer container 2725c55618f7623dba43274e85f58e0d7b01c5e93171b6a26e5bbe65d0c246ba. 
Jul 15 05:21:14.113948 containerd[1571]: time="2025-07-15T05:21:14.113913688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjjqr,Uid:2906217e-d60f-46de-8e0a-40e519ff8ae1,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd\"" Jul 15 05:21:14.115913 containerd[1571]: time="2025-07-15T05:21:14.115893625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 05:21:14.147203 containerd[1571]: time="2025-07-15T05:21:14.147176886Z" level=info msg="StartContainer for \"2725c55618f7623dba43274e85f58e0d7b01c5e93171b6a26e5bbe65d0c246ba\" returns successfully" Jul 15 05:21:14.749953 kubelet[2738]: E0715 05:21:14.749912 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:14.751450 containerd[1571]: time="2025-07-15T05:21:14.751108654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5mgz4,Uid:6bf14037-0af1-4def-a60f-4ed667c2ddc4,Namespace:kube-system,Attempt:0,}" Jul 15 05:21:14.754640 containerd[1571]: time="2025-07-15T05:21:14.754372558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74555f585c-q8hg6,Uid:cba66337-d964-4612-bd04-820a667c6818,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:21:14.756869 containerd[1571]: time="2025-07-15T05:21:14.756803194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ktdbq,Uid:d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51,Namespace:calico-system,Attempt:0,}" Jul 15 05:21:14.943229 kubelet[2738]: E0715 05:21:14.943098 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:14.955654 systemd-networkd[1452]: cali7703d22decb: Link UP Jul 15 
05:21:14.957655 systemd-networkd[1452]: cali7703d22decb: Gained carrier Jul 15 05:21:14.959040 kubelet[2738]: I0715 05:21:14.958931 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j2s8z" podStartSLOduration=33.958914135 podStartE2EDuration="33.958914135s" podCreationTimestamp="2025-07-15 05:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:21:14.958204944 +0000 UTC m=+39.303185560" watchObservedRunningTime="2025-07-15 05:21:14.958914135 +0000 UTC m=+39.303894741" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.819 [INFO][4581] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.840 [INFO][4581] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0 coredns-7c65d6cfc9- kube-system 6bf14037-0af1-4def-a60f-4ed667c2ddc4 845 0 2025-07-15 05:20:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-133-19 coredns-7c65d6cfc9-5mgz4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7703d22decb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5mgz4" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.841 [INFO][4581] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5mgz4" 
WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.891 [INFO][4618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" HandleID="k8s-pod-network.cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Workload="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.891 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" HandleID="k8s-pod-network.cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Workload="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfc60), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-133-19", "pod":"coredns-7c65d6cfc9-5mgz4", "timestamp":"2025-07-15 05:21:14.891670578 +0000 UTC"}, Hostname:"172-237-133-19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.892 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.892 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.892 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-19' Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.902 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" host="172-237-133-19" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.910 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-19" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.916 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.918 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.921 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.921 [INFO][4618] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" host="172-237-133-19" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.923 [INFO][4618] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7 Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.928 [INFO][4618] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" host="172-237-133-19" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.935 [INFO][4618] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.133/26] block=192.168.21.128/26 
handle="k8s-pod-network.cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" host="172-237-133-19" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.935 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.133/26] handle="k8s-pod-network.cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" host="172-237-133-19" Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.936 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:21:14.993711 containerd[1571]: 2025-07-15 05:21:14.936 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.133/26] IPv6=[] ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" HandleID="k8s-pod-network.cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Workload="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" Jul 15 05:21:14.994378 containerd[1571]: 2025-07-15 05:21:14.948 [INFO][4581] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5mgz4" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6bf14037-0af1-4def-a60f-4ed667c2ddc4", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"", Pod:"coredns-7c65d6cfc9-5mgz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7703d22decb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:14.994378 containerd[1571]: 2025-07-15 05:21:14.948 [INFO][4581] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.133/32] ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5mgz4" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" Jul 15 05:21:14.994378 containerd[1571]: 2025-07-15 05:21:14.948 [INFO][4581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7703d22decb ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5mgz4" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" Jul 15 05:21:14.994378 containerd[1571]: 2025-07-15 05:21:14.962 [INFO][4581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-5mgz4" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" Jul 15 05:21:14.994378 containerd[1571]: 2025-07-15 05:21:14.965 [INFO][4581] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5mgz4" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6bf14037-0af1-4def-a60f-4ed667c2ddc4", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7", Pod:"coredns-7c65d6cfc9-5mgz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7703d22decb", MAC:"26:de:ba:fb:55:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:14.994378 containerd[1571]: 2025-07-15 05:21:14.989 [INFO][4581] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5mgz4" WorkloadEndpoint="172--237--133--19-k8s-coredns--7c65d6cfc9--5mgz4-eth0" Jul 15 05:21:15.033129 containerd[1571]: time="2025-07-15T05:21:15.033085584Z" level=info msg="connecting to shim cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7" address="unix:///run/containerd/s/d308130df5adc8783d2bc7b4ba06b80b551fd7782759c4c377c79dad3bac31d6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:15.056767 containerd[1571]: time="2025-07-15T05:21:15.056654958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:15.059747 containerd[1571]: time="2025-07-15T05:21:15.059719738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 05:21:15.060904 containerd[1571]: time="2025-07-15T05:21:15.060192986Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:15.066464 containerd[1571]: time="2025-07-15T05:21:15.066002967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:15.070541 
containerd[1571]: time="2025-07-15T05:21:15.066631721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 950.516842ms" Jul 15 05:21:15.071155 containerd[1571]: time="2025-07-15T05:21:15.071137945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 05:21:15.082857 containerd[1571]: time="2025-07-15T05:21:15.082422213Z" level=info msg="CreateContainer within sandbox \"0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 05:21:15.095372 systemd-networkd[1452]: calid16cb37a09e: Link UP Jul 15 05:21:15.096673 systemd-networkd[1452]: calid16cb37a09e: Gained carrier Jul 15 05:21:15.106738 containerd[1571]: time="2025-07-15T05:21:15.106710559Z" level=info msg="Container 50870cbb938980a67c1bcce808118866f7aeaf381de4337432d9a7d996dab67f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:14.820 [INFO][4594] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:14.839 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0 goldmane-58fd7646b9- calico-system d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51 846 0 2025-07-15 05:20:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 
172-237-133-19 goldmane-58fd7646b9-ktdbq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid16cb37a09e [] [] }} ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Namespace="calico-system" Pod="goldmane-58fd7646b9-ktdbq" WorkloadEndpoint="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:14.839 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Namespace="calico-system" Pod="goldmane-58fd7646b9-ktdbq" WorkloadEndpoint="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:14.894 [INFO][4620] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" HandleID="k8s-pod-network.a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Workload="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:14.896 [INFO][4620] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" HandleID="k8s-pod-network.a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Workload="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7640), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-133-19", "pod":"goldmane-58fd7646b9-ktdbq", "timestamp":"2025-07-15 05:21:14.893964158 +0000 UTC"}, Hostname:"172-237-133-19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:14.896 [INFO][4620] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:14.935 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:14.936 [INFO][4620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-19' Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.002 [INFO][4620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" host="172-237-133-19" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.012 [INFO][4620] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-19" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.025 [INFO][4620] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.039 [INFO][4620] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.045 [INFO][4620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.045 [INFO][4620] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" host="172-237-133-19" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.048 [INFO][4620] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.054 [INFO][4620] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" 
host="172-237-133-19" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.065 [INFO][4620] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.134/26] block=192.168.21.128/26 handle="k8s-pod-network.a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" host="172-237-133-19" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.065 [INFO][4620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.134/26] handle="k8s-pod-network.a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" host="172-237-133-19" Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.065 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:21:15.119288 containerd[1571]: 2025-07-15 05:21:15.065 [INFO][4620] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.134/26] IPv6=[] ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" HandleID="k8s-pod-network.a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Workload="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" Jul 15 05:21:15.119927 containerd[1571]: 2025-07-15 05:21:15.091 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Namespace="calico-system" Pod="goldmane-58fd7646b9-ktdbq" WorkloadEndpoint="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", 
"k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"", Pod:"goldmane-58fd7646b9-ktdbq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid16cb37a09e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:15.119927 containerd[1571]: 2025-07-15 05:21:15.091 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.134/32] ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Namespace="calico-system" Pod="goldmane-58fd7646b9-ktdbq" WorkloadEndpoint="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" Jul 15 05:21:15.119927 containerd[1571]: 2025-07-15 05:21:15.091 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid16cb37a09e ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Namespace="calico-system" Pod="goldmane-58fd7646b9-ktdbq" WorkloadEndpoint="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" Jul 15 05:21:15.119927 containerd[1571]: 2025-07-15 05:21:15.098 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Namespace="calico-system" Pod="goldmane-58fd7646b9-ktdbq" WorkloadEndpoint="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" Jul 15 05:21:15.119927 containerd[1571]: 2025-07-15 05:21:15.098 
[INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Namespace="calico-system" Pod="goldmane-58fd7646b9-ktdbq" WorkloadEndpoint="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b", Pod:"goldmane-58fd7646b9-ktdbq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid16cb37a09e", MAC:"e2:28:32:5b:71:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:15.119927 containerd[1571]: 2025-07-15 05:21:15.114 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" Namespace="calico-system" Pod="goldmane-58fd7646b9-ktdbq" WorkloadEndpoint="172--237--133--19-k8s-goldmane--58fd7646b9--ktdbq-eth0" Jul 15 05:21:15.133932 systemd[1]: Started cri-containerd-cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7.scope - libcontainer container cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7. Jul 15 05:21:15.136477 containerd[1571]: time="2025-07-15T05:21:15.136429493Z" level=info msg="CreateContainer within sandbox \"0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"50870cbb938980a67c1bcce808118866f7aeaf381de4337432d9a7d996dab67f\"" Jul 15 05:21:15.137618 containerd[1571]: time="2025-07-15T05:21:15.137146586Z" level=info msg="StartContainer for \"50870cbb938980a67c1bcce808118866f7aeaf381de4337432d9a7d996dab67f\"" Jul 15 05:21:15.140957 containerd[1571]: time="2025-07-15T05:21:15.140937347Z" level=info msg="connecting to shim 50870cbb938980a67c1bcce808118866f7aeaf381de4337432d9a7d996dab67f" address="unix:///run/containerd/s/7ee4a413951c09a5c605cf1832b7dfaa48caefce1fcffc45156c9c6106a2e148" protocol=ttrpc version=3 Jul 15 05:21:15.168681 containerd[1571]: time="2025-07-15T05:21:15.168138837Z" level=info msg="connecting to shim a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b" address="unix:///run/containerd/s/6732d1b2651c04ad2ae5bcf6033fea19e0828c50ca88014417f0ad2c659a859a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:15.202008 systemd[1]: Started cri-containerd-50870cbb938980a67c1bcce808118866f7aeaf381de4337432d9a7d996dab67f.scope - libcontainer container 50870cbb938980a67c1bcce808118866f7aeaf381de4337432d9a7d996dab67f. 
Jul 15 05:21:15.208758 systemd-networkd[1452]: cali994d963f684: Link UP Jul 15 05:21:15.212099 systemd-networkd[1452]: cali994d963f684: Gained carrier Jul 15 05:21:15.229605 systemd-networkd[1452]: calia49700f6bf8: Gained IPv6LL Jul 15 05:21:15.230674 systemd[1]: Started cri-containerd-a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b.scope - libcontainer container a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b. Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:14.822 [INFO][4585] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:14.846 [INFO][4585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0 calico-apiserver-74555f585c- calico-apiserver cba66337-d964-4612-bd04-820a667c6818 843 0 2025-07-15 05:20:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74555f585c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-133-19 calico-apiserver-74555f585c-q8hg6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali994d963f684 [] [] }} ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-q8hg6" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:14.846 [INFO][4585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-q8hg6" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" Jul 15 
05:21:15.246976 containerd[1571]: 2025-07-15 05:21:14.902 [INFO][4628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" HandleID="k8s-pod-network.5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Workload="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:14.903 [INFO][4628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" HandleID="k8s-pod-network.5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Workload="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-133-19", "pod":"calico-apiserver-74555f585c-q8hg6", "timestamp":"2025-07-15 05:21:14.901917287 +0000 UTC"}, Hostname:"172-237-133-19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:14.903 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.066 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.066 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-19' Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.118 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" host="172-237-133-19" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.133 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-19" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.147 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.150 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.155 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.155 [INFO][4628] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" host="172-237-133-19" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.161 [INFO][4628] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.170 [INFO][4628] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" host="172-237-133-19" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.180 [INFO][4628] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.135/26] block=192.168.21.128/26 
handle="k8s-pod-network.5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" host="172-237-133-19" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.180 [INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.135/26] handle="k8s-pod-network.5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" host="172-237-133-19" Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.180 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:21:15.246976 containerd[1571]: 2025-07-15 05:21:15.180 [INFO][4628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.135/26] IPv6=[] ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" HandleID="k8s-pod-network.5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Workload="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" Jul 15 05:21:15.248201 containerd[1571]: 2025-07-15 05:21:15.196 [INFO][4585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-q8hg6" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0", GenerateName:"calico-apiserver-74555f585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cba66337-d964-4612-bd04-820a667c6818", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74555f585c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"", Pod:"calico-apiserver-74555f585c-q8hg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali994d963f684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:15.248201 containerd[1571]: 2025-07-15 05:21:15.196 [INFO][4585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.135/32] ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-q8hg6" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" Jul 15 05:21:15.248201 containerd[1571]: 2025-07-15 05:21:15.196 [INFO][4585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali994d963f684 ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-q8hg6" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" Jul 15 05:21:15.248201 containerd[1571]: 2025-07-15 05:21:15.222 [INFO][4585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-q8hg6" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" Jul 15 05:21:15.248201 containerd[1571]: 2025-07-15 05:21:15.222 [INFO][4585] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-q8hg6" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0", GenerateName:"calico-apiserver-74555f585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cba66337-d964-4612-bd04-820a667c6818", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74555f585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb", Pod:"calico-apiserver-74555f585c-q8hg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali994d963f684", MAC:"7e:dc:16:70:03:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:15.248201 containerd[1571]: 2025-07-15 05:21:15.243 [INFO][4585] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-q8hg6" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--q8hg6-eth0" Jul 15 05:21:15.293456 containerd[1571]: time="2025-07-15T05:21:15.293342354Z" level=info msg="connecting to shim 5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb" address="unix:///run/containerd/s/1f6b5aa518d2985cd57f26d6365c43cbf5a7ad517760104710d9b13d4e7f2056" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:15.298020 containerd[1571]: time="2025-07-15T05:21:15.297935586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5mgz4,Uid:6bf14037-0af1-4def-a60f-4ed667c2ddc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7\"" Jul 15 05:21:15.299389 kubelet[2738]: E0715 05:21:15.299192 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:15.302796 containerd[1571]: time="2025-07-15T05:21:15.302756462Z" level=info msg="CreateContainer within sandbox \"cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:21:15.315921 containerd[1571]: time="2025-07-15T05:21:15.315893834Z" level=info msg="Container 5b9ecee88d7e6d949368d749ad71a6daa236a6673cd02d3a26a0f069fb3a7a57: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:15.323112 containerd[1571]: time="2025-07-15T05:21:15.323078708Z" level=info msg="CreateContainer within sandbox \"cc2f37ad8e2d6429c29ae6c7e84f97b444aa6016701c400e5cb8b1b6cc0973c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b9ecee88d7e6d949368d749ad71a6daa236a6673cd02d3a26a0f069fb3a7a57\"" Jul 15 05:21:15.327077 
containerd[1571]: time="2025-07-15T05:21:15.326861781Z" level=info msg="StartContainer for \"5b9ecee88d7e6d949368d749ad71a6daa236a6673cd02d3a26a0f069fb3a7a57\"" Jul 15 05:21:15.330471 containerd[1571]: time="2025-07-15T05:21:15.330389230Z" level=info msg="connecting to shim 5b9ecee88d7e6d949368d749ad71a6daa236a6673cd02d3a26a0f069fb3a7a57" address="unix:///run/containerd/s/d308130df5adc8783d2bc7b4ba06b80b551fd7782759c4c377c79dad3bac31d6" protocol=ttrpc version=3 Jul 15 05:21:15.338655 systemd[1]: Started cri-containerd-5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb.scope - libcontainer container 5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb. Jul 15 05:21:15.351323 containerd[1571]: time="2025-07-15T05:21:15.351285763Z" level=info msg="StartContainer for \"50870cbb938980a67c1bcce808118866f7aeaf381de4337432d9a7d996dab67f\" returns successfully" Jul 15 05:21:15.354851 containerd[1571]: time="2025-07-15T05:21:15.354827361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 05:21:15.364746 systemd[1]: Started cri-containerd-5b9ecee88d7e6d949368d749ad71a6daa236a6673cd02d3a26a0f069fb3a7a57.scope - libcontainer container 5b9ecee88d7e6d949368d749ad71a6daa236a6673cd02d3a26a0f069fb3a7a57. 
Jul 15 05:21:15.398068 containerd[1571]: time="2025-07-15T05:21:15.398024289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ktdbq,Uid:d1d64b8b-3dbd-4eb7-b4e8-bd08cd407c51,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b\"" Jul 15 05:21:15.439026 containerd[1571]: time="2025-07-15T05:21:15.438985074Z" level=info msg="StartContainer for \"5b9ecee88d7e6d949368d749ad71a6daa236a6673cd02d3a26a0f069fb3a7a57\" returns successfully" Jul 15 05:21:15.521322 containerd[1571]: time="2025-07-15T05:21:15.521287396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74555f585c-q8hg6,Uid:cba66337-d964-4612-bd04-820a667c6818,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb\"" Jul 15 05:21:15.740726 systemd-networkd[1452]: cali62c342fdb31: Gained IPv6LL Jul 15 05:21:15.750812 containerd[1571]: time="2025-07-15T05:21:15.750770707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74555f585c-mntkg,Uid:977684f5-8c03-4eaa-86e9-f712519d6004,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:21:15.849939 kubelet[2738]: I0715 05:21:15.849608 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:21:15.873839 systemd-networkd[1452]: cali23fde206963: Link UP Jul 15 05:21:15.875290 systemd-networkd[1452]: cali23fde206963: Gained carrier Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.785 [INFO][4885] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.794 [INFO][4885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0 calico-apiserver-74555f585c- calico-apiserver 977684f5-8c03-4eaa-86e9-f712519d6004 842 0 2025-07-15 05:20:51 
+0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74555f585c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-133-19 calico-apiserver-74555f585c-mntkg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali23fde206963 [] [] }} ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-mntkg" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.795 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-mntkg" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.823 [INFO][4897] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" HandleID="k8s-pod-network.78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Workload="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.823 [INFO][4897] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" HandleID="k8s-pod-network.78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Workload="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-133-19", "pod":"calico-apiserver-74555f585c-mntkg", 
"timestamp":"2025-07-15 05:21:15.823173894 +0000 UTC"}, Hostname:"172-237-133-19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.823 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.823 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.823 [INFO][4897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-19' Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.833 [INFO][4897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" host="172-237-133-19" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.838 [INFO][4897] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-19" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.842 [INFO][4897] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.844 [INFO][4897] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.846 [INFO][4897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="172-237-133-19" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.847 [INFO][4897] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" host="172-237-133-19" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.848 [INFO][4897] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.852 [INFO][4897] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" host="172-237-133-19" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.861 [INFO][4897] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.136/26] block=192.168.21.128/26 handle="k8s-pod-network.78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" host="172-237-133-19" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.861 [INFO][4897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.136/26] handle="k8s-pod-network.78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" host="172-237-133-19" Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.861 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:21:15.891697 containerd[1571]: 2025-07-15 05:21:15.861 [INFO][4897] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.136/26] IPv6=[] ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" HandleID="k8s-pod-network.78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Workload="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" Jul 15 05:21:15.892908 containerd[1571]: 2025-07-15 05:21:15.866 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-mntkg" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0", GenerateName:"calico-apiserver-74555f585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"977684f5-8c03-4eaa-86e9-f712519d6004", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74555f585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"", Pod:"calico-apiserver-74555f585c-mntkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23fde206963", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:15.892908 containerd[1571]: 2025-07-15 05:21:15.866 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.136/32] ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-mntkg" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" Jul 15 05:21:15.892908 containerd[1571]: 2025-07-15 05:21:15.866 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23fde206963 ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-mntkg" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" Jul 15 05:21:15.892908 containerd[1571]: 2025-07-15 05:21:15.874 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-mntkg" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" Jul 15 05:21:15.892908 containerd[1571]: 2025-07-15 05:21:15.875 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-mntkg" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0", GenerateName:"calico-apiserver-74555f585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"977684f5-8c03-4eaa-86e9-f712519d6004", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74555f585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-19", ContainerID:"78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a", Pod:"calico-apiserver-74555f585c-mntkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23fde206963", MAC:"f6:f3:e8:56:11:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:21:15.892908 containerd[1571]: 2025-07-15 05:21:15.888 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" Namespace="calico-apiserver" Pod="calico-apiserver-74555f585c-mntkg" WorkloadEndpoint="172--237--133--19-k8s-calico--apiserver--74555f585c--mntkg-eth0" Jul 15 05:21:15.913530 containerd[1571]: time="2025-07-15T05:21:15.912736958Z" level=info msg="connecting to shim 
78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a" address="unix:///run/containerd/s/16a852a8d477c9e03cf5e8c38c950fc102a1cd52dea0a53a87e0210bf06a403d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:15.940540 containerd[1571]: time="2025-07-15T05:21:15.940210530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\" id:\"60abc64cda0c1df047277ec1f039cc82bdfde6541e2e391b97833ee82d9882e2\" pid:4920 exited_at:{seconds:1752556875 nanos:938767978}" Jul 15 05:21:15.973805 systemd[1]: Started cri-containerd-78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a.scope - libcontainer container 78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a. Jul 15 05:21:15.994057 kubelet[2738]: E0715 05:21:15.993479 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:15.995002 kubelet[2738]: E0715 05:21:15.994353 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:16.032456 kubelet[2738]: I0715 05:21:16.032331 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5mgz4" podStartSLOduration=35.032317973 podStartE2EDuration="35.032317973s" podCreationTimestamp="2025-07-15 05:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:21:16.016337032 +0000 UTC m=+40.361317648" watchObservedRunningTime="2025-07-15 05:21:16.032317973 +0000 UTC m=+40.377298569" Jul 15 05:21:16.113922 containerd[1571]: time="2025-07-15T05:21:16.113890344Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\" id:\"8133b004ca8f727d1cce5d6ceaeb698f62f004692b4d9ef1814cc141c7855a76\" pid:4990 exited_at:{seconds:1752556876 nanos:113630370}" Jul 15 05:21:16.142915 containerd[1571]: time="2025-07-15T05:21:16.142838410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74555f585c-mntkg,Uid:977684f5-8c03-4eaa-86e9-f712519d6004,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a\"" Jul 15 05:21:16.287421 containerd[1571]: time="2025-07-15T05:21:16.287377946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:16.288173 containerd[1571]: time="2025-07-15T05:21:16.288130147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 05:21:16.289564 containerd[1571]: time="2025-07-15T05:21:16.288702083Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:16.290271 containerd[1571]: time="2025-07-15T05:21:16.290235784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:16.290874 containerd[1571]: time="2025-07-15T05:21:16.290838529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 935.982989ms" Jul 15 05:21:16.290920 containerd[1571]: time="2025-07-15T05:21:16.290872498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 05:21:16.292267 containerd[1571]: time="2025-07-15T05:21:16.292234794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 05:21:16.294376 containerd[1571]: time="2025-07-15T05:21:16.294243134Z" level=info msg="CreateContainer within sandbox \"0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 05:21:16.303729 containerd[1571]: time="2025-07-15T05:21:16.303693118Z" level=info msg="Container 80422b65dacee58b5312fd07492b2e8e0494e319c1e6c307d59678314c9baef8: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:16.310328 containerd[1571]: time="2025-07-15T05:21:16.310290233Z" level=info msg="CreateContainer within sandbox \"0b4c77228c3f82c7c7ab352e3017b11c48469d31789be63e8539edf46c6520bd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"80422b65dacee58b5312fd07492b2e8e0494e319c1e6c307d59678314c9baef8\"" Jul 15 05:21:16.311702 containerd[1571]: time="2025-07-15T05:21:16.311679418Z" level=info msg="StartContainer for \"80422b65dacee58b5312fd07492b2e8e0494e319c1e6c307d59678314c9baef8\"" Jul 15 05:21:16.312935 containerd[1571]: time="2025-07-15T05:21:16.312901768Z" level=info msg="connecting to shim 80422b65dacee58b5312fd07492b2e8e0494e319c1e6c307d59678314c9baef8" address="unix:///run/containerd/s/7ee4a413951c09a5c605cf1832b7dfaa48caefce1fcffc45156c9c6106a2e148" protocol=ttrpc version=3 Jul 15 05:21:16.338845 systemd[1]: Started 
cri-containerd-80422b65dacee58b5312fd07492b2e8e0494e319c1e6c307d59678314c9baef8.scope - libcontainer container 80422b65dacee58b5312fd07492b2e8e0494e319c1e6c307d59678314c9baef8. Jul 15 05:21:16.380669 systemd-networkd[1452]: calid16cb37a09e: Gained IPv6LL Jul 15 05:21:16.383237 containerd[1571]: time="2025-07-15T05:21:16.383204260Z" level=info msg="StartContainer for \"80422b65dacee58b5312fd07492b2e8e0494e319c1e6c307d59678314c9baef8\" returns successfully" Jul 15 05:21:16.765121 systemd-networkd[1452]: cali7703d22decb: Gained IPv6LL Jul 15 05:21:16.866244 kubelet[2738]: I0715 05:21:16.866193 2738 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 05:21:16.866901 kubelet[2738]: I0715 05:21:16.866542 2738 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 05:21:17.002653 kubelet[2738]: E0715 05:21:17.002609 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:17.004715 kubelet[2738]: E0715 05:21:17.004664 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:17.148685 systemd-networkd[1452]: cali994d963f684: Gained IPv6LL Jul 15 05:21:17.404832 systemd-networkd[1452]: cali23fde206963: Gained IPv6LL Jul 15 05:21:17.797474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588803887.mount: Deactivated successfully. 
Jul 15 05:21:18.002752 kubelet[2738]: E0715 05:21:18.002718 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jul 15 05:21:18.218434 containerd[1571]: time="2025-07-15T05:21:18.218327527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:18.222352 containerd[1571]: time="2025-07-15T05:21:18.222303273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 05:21:18.223527 containerd[1571]: time="2025-07-15T05:21:18.223360688Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:18.225325 containerd[1571]: time="2025-07-15T05:21:18.225281313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:18.226001 containerd[1571]: time="2025-07-15T05:21:18.225856889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 1.933572486s" Jul 15 05:21:18.226001 containerd[1571]: time="2025-07-15T05:21:18.225887358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 05:21:18.227055 containerd[1571]: time="2025-07-15T05:21:18.227036891Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:21:18.229387 containerd[1571]: time="2025-07-15T05:21:18.229351177Z" level=info msg="CreateContainer within sandbox \"a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 05:21:18.234526 containerd[1571]: time="2025-07-15T05:21:18.234337769Z" level=info msg="Container 0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:18.251800 containerd[1571]: time="2025-07-15T05:21:18.251756848Z" level=info msg="CreateContainer within sandbox \"a3694573023ac63c738a3ace5d2ada53153a21cac5f4dda5e6c055a4fb67af6b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\"" Jul 15 05:21:18.252403 containerd[1571]: time="2025-07-15T05:21:18.252366874Z" level=info msg="StartContainer for \"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\"" Jul 15 05:21:18.253585 containerd[1571]: time="2025-07-15T05:21:18.253530006Z" level=info msg="connecting to shim 0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e" address="unix:///run/containerd/s/6732d1b2651c04ad2ae5bcf6033fea19e0828c50ca88014417f0ad2c659a859a" protocol=ttrpc version=3 Jul 15 05:21:18.278735 systemd[1]: Started cri-containerd-0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e.scope - libcontainer container 0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e. 
Jul 15 05:21:18.335771 containerd[1571]: time="2025-07-15T05:21:18.335704618Z" level=info msg="StartContainer for \"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\" returns successfully" Jul 15 05:21:19.015938 kubelet[2738]: I0715 05:21:19.015803 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zjjqr" podStartSLOduration=22.83941762 podStartE2EDuration="25.015785918s" podCreationTimestamp="2025-07-15 05:20:54 +0000 UTC" firstStartedPulling="2025-07-15 05:21:14.115419377 +0000 UTC m=+38.460399973" lastFinishedPulling="2025-07-15 05:21:16.291787675 +0000 UTC m=+40.636768271" observedRunningTime="2025-07-15 05:21:17.016208185 +0000 UTC m=+41.361188791" watchObservedRunningTime="2025-07-15 05:21:19.015785918 +0000 UTC m=+43.360766514" Jul 15 05:21:19.017134 kubelet[2738]: I0715 05:21:19.016009 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-ktdbq" podStartSLOduration=23.189107347 podStartE2EDuration="26.016004814s" podCreationTimestamp="2025-07-15 05:20:53 +0000 UTC" firstStartedPulling="2025-07-15 05:21:15.399800643 +0000 UTC m=+39.744781239" lastFinishedPulling="2025-07-15 05:21:18.22669811 +0000 UTC m=+42.571678706" observedRunningTime="2025-07-15 05:21:19.015014996 +0000 UTC m=+43.359995592" watchObservedRunningTime="2025-07-15 05:21:19.016004814 +0000 UTC m=+43.360985410" Jul 15 05:21:20.008566 kubelet[2738]: I0715 05:21:20.008198 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:21:20.090957 containerd[1571]: time="2025-07-15T05:21:20.090926116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:20.091362 containerd[1571]: time="2025-07-15T05:21:20.091292697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 
05:21:20.092038 containerd[1571]: time="2025-07-15T05:21:20.092016001Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:20.093542 containerd[1571]: time="2025-07-15T05:21:20.093483319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:20.094076 containerd[1571]: time="2025-07-15T05:21:20.094042467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.866918178s" Jul 15 05:21:20.094076 containerd[1571]: time="2025-07-15T05:21:20.094074046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:21:20.095826 containerd[1571]: time="2025-07-15T05:21:20.095795097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:21:20.097180 containerd[1571]: time="2025-07-15T05:21:20.096991561Z" level=info msg="CreateContainer within sandbox \"5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:21:20.103476 containerd[1571]: time="2025-07-15T05:21:20.101989759Z" level=info msg="Container 801a83fd3270bcfcd0f0f8d280927f6b0013470abf00693848f5dcfa1efb2abe: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:20.115522 containerd[1571]: time="2025-07-15T05:21:20.115478560Z" level=info msg="CreateContainer within sandbox 
\"5e0228fb2e9dde4417ea64f7f85088289c9c2699e893ea0f88bf3183665168bb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"801a83fd3270bcfcd0f0f8d280927f6b0013470abf00693848f5dcfa1efb2abe\"" Jul 15 05:21:20.115967 containerd[1571]: time="2025-07-15T05:21:20.115896530Z" level=info msg="StartContainer for \"801a83fd3270bcfcd0f0f8d280927f6b0013470abf00693848f5dcfa1efb2abe\"" Jul 15 05:21:20.117003 containerd[1571]: time="2025-07-15T05:21:20.116971466Z" level=info msg="connecting to shim 801a83fd3270bcfcd0f0f8d280927f6b0013470abf00693848f5dcfa1efb2abe" address="unix:///run/containerd/s/1f6b5aa518d2985cd57f26d6365c43cbf5a7ad517760104710d9b13d4e7f2056" protocol=ttrpc version=3 Jul 15 05:21:20.141617 systemd[1]: Started cri-containerd-801a83fd3270bcfcd0f0f8d280927f6b0013470abf00693848f5dcfa1efb2abe.scope - libcontainer container 801a83fd3270bcfcd0f0f8d280927f6b0013470abf00693848f5dcfa1efb2abe. Jul 15 05:21:20.193095 containerd[1571]: time="2025-07-15T05:21:20.193038603Z" level=info msg="StartContainer for \"801a83fd3270bcfcd0f0f8d280927f6b0013470abf00693848f5dcfa1efb2abe\" returns successfully" Jul 15 05:21:20.260374 containerd[1571]: time="2025-07-15T05:21:20.260287947Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:21:20.261029 containerd[1571]: time="2025-07-15T05:21:20.261003461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 05:21:20.263256 containerd[1571]: time="2025-07-15T05:21:20.263212531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 167.389404ms" 
Jul 15 05:21:20.263256 containerd[1571]: time="2025-07-15T05:21:20.263235541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:21:20.266208 containerd[1571]: time="2025-07-15T05:21:20.265833503Z" level=info msg="CreateContainer within sandbox \"78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:21:20.273291 containerd[1571]: time="2025-07-15T05:21:20.273269807Z" level=info msg="Container 8f6125bc2d77a13b66f7435840373c3375ac20bfc4712b4b93cda8e96c05f6d1: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:20.291488 containerd[1571]: time="2025-07-15T05:21:20.291466863Z" level=info msg="CreateContainer within sandbox \"78b2ffe82ac1fdd5a4456ddf01c941bb2d3ed9bc1a2c86b095a07981cf538a2a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8f6125bc2d77a13b66f7435840373c3375ac20bfc4712b4b93cda8e96c05f6d1\"" Jul 15 05:21:20.292780 containerd[1571]: time="2025-07-15T05:21:20.292053390Z" level=info msg="StartContainer for \"8f6125bc2d77a13b66f7435840373c3375ac20bfc4712b4b93cda8e96c05f6d1\"" Jul 15 05:21:20.294554 containerd[1571]: time="2025-07-15T05:21:20.294534864Z" level=info msg="connecting to shim 8f6125bc2d77a13b66f7435840373c3375ac20bfc4712b4b93cda8e96c05f6d1" address="unix:///run/containerd/s/16a852a8d477c9e03cf5e8c38c950fc102a1cd52dea0a53a87e0210bf06a403d" protocol=ttrpc version=3 Jul 15 05:21:20.319603 systemd[1]: Started cri-containerd-8f6125bc2d77a13b66f7435840373c3375ac20bfc4712b4b93cda8e96c05f6d1.scope - libcontainer container 8f6125bc2d77a13b66f7435840373c3375ac20bfc4712b4b93cda8e96c05f6d1. 
Jul 15 05:21:20.377857 containerd[1571]: time="2025-07-15T05:21:20.377837700Z" level=info msg="StartContainer for \"8f6125bc2d77a13b66f7435840373c3375ac20bfc4712b4b93cda8e96c05f6d1\" returns successfully"
Jul 15 05:21:21.032358 kubelet[2738]: I0715 05:21:21.032293 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74555f585c-q8hg6" podStartSLOduration=25.461323745 podStartE2EDuration="30.032279325s" podCreationTimestamp="2025-07-15 05:20:51 +0000 UTC" firstStartedPulling="2025-07-15 05:21:15.524048645 +0000 UTC m=+39.869029241" lastFinishedPulling="2025-07-15 05:21:20.095004225 +0000 UTC m=+44.439984821" observedRunningTime="2025-07-15 05:21:21.03207985 +0000 UTC m=+45.377060446" watchObservedRunningTime="2025-07-15 05:21:21.032279325 +0000 UTC m=+45.377259921"
Jul 15 05:21:21.745532 kubelet[2738]: I0715 05:21:21.745227 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 05:21:21.827295 containerd[1571]: time="2025-07-15T05:21:21.827262376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\" id:\"3309aa5e46d8161427955e80e5a8879e5231876843568a1882a51a74c6615298\" pid:5308 exit_status:1 exited_at:{seconds:1752556881 nanos:826907934}"
Jul 15 05:21:21.905212 containerd[1571]: time="2025-07-15T05:21:21.905173521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\" id:\"fa7a6e4a5694be70df4375a57959c936d23f110dcfc60ca8b2deda08b139677d\" pid:5333 exit_status:1 exited_at:{seconds:1752556881 nanos:904941677}"
Jul 15 05:21:22.024885 kubelet[2738]: I0715 05:21:22.024841 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 05:21:22.024984 kubelet[2738]: I0715 05:21:22.024895 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 05:21:22.363961 kubelet[2738]: I0715 05:21:22.363784 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 05:21:22.364305 kubelet[2738]: E0715 05:21:22.364081 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:21:22.375033 kubelet[2738]: I0715 05:21:22.374984 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74555f585c-mntkg" podStartSLOduration=27.256754586 podStartE2EDuration="31.374973883s" podCreationTimestamp="2025-07-15 05:20:51 +0000 UTC" firstStartedPulling="2025-07-15 05:21:16.145744107 +0000 UTC m=+40.490724703" lastFinishedPulling="2025-07-15 05:21:20.263963404 +0000 UTC m=+44.608944000" observedRunningTime="2025-07-15 05:21:21.055376645 +0000 UTC m=+45.400357241" watchObservedRunningTime="2025-07-15 05:21:22.374973883 +0000 UTC m=+46.719954479"
Jul 15 05:21:23.027059 kubelet[2738]: E0715 05:21:23.027031 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:21:23.544685 systemd-networkd[1452]: vxlan.calico: Link UP
Jul 15 05:21:23.544696 systemd-networkd[1452]: vxlan.calico: Gained carrier
Jul 15 05:21:25.212686 systemd-networkd[1452]: vxlan.calico: Gained IPv6LL
Jul 15 05:21:31.583270 containerd[1571]: time="2025-07-15T05:21:31.583215938Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\" id:\"8a1d389fbd8b621410c24c2f7a91a8678fa9e5c1ca3a3b04bab63cc5bb8131a4\" pid:5513 exited_at:{seconds:1752556891 nanos:582925943}"
Jul 15 05:21:33.974109 kubelet[2738]: I0715 05:21:33.973822 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 05:21:36.691742 kubelet[2738]: I0715 05:21:36.691487 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 05:21:45.902771 containerd[1571]: time="2025-07-15T05:21:45.902726022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\" id:\"33e9cabaa3c485799796ea552f6b10678fdf8ac1c2aa41b00435c9c65e3f3ae9\" pid:5560 exited_at:{seconds:1752556905 nanos:901447557}"
Jul 15 05:21:50.144683 containerd[1571]: time="2025-07-15T05:21:50.144641429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\" id:\"e32686d112f990562a6a6c23e56150ee54890c60ec7e73dadacb173cafc11b44\" pid:5582 exited_at:{seconds:1752556910 nanos:144323041}"
Jul 15 05:21:51.834754 containerd[1571]: time="2025-07-15T05:21:51.834697274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\" id:\"a7aa4fbe5e173653021fa28859a65fecb074fb7e16edeec39e2cd60421357b4a\" pid:5604 exited_at:{seconds:1752556911 nanos:833329107}"
Jul 15 05:21:53.749776 kubelet[2738]: E0715 05:21:53.749737 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:22:01.605645 containerd[1571]: time="2025-07-15T05:22:01.605589527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\" id:\"c6fe1c2330e90d46d8b6aa6d3f6b2e1d7a30cd75d60f3291042915e398015232\" pid:5629 exited_at:{seconds:1752556921 nanos:605131361}"
Jul 15 05:22:02.749674 kubelet[2738]: E0715 05:22:02.749635 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:22:04.751532 kubelet[2738]: E0715 05:22:04.750771 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:22:06.749203 kubelet[2738]: E0715 05:22:06.749119 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:22:14.180901 systemd[1]: Started sshd@7-172.237.133.19:22-184.148.36.227:60842.service - OpenSSH per-connection server daemon (184.148.36.227:60842).
Jul 15 05:22:14.404876 sshd[5652]: Received disconnect from 184.148.36.227 port 60842:11: Bye Bye [preauth]
Jul 15 05:22:14.405739 sshd[5652]: Disconnected from authenticating user root 184.148.36.227 port 60842 [preauth]
Jul 15 05:22:14.407945 systemd[1]: sshd@7-172.237.133.19:22-184.148.36.227:60842.service: Deactivated successfully.
Jul 15 05:22:14.653345 containerd[1571]: time="2025-07-15T05:22:14.653226540Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\" id:\"0971c39202a071d3be3f27ef4f79c883534807aca3728b0308dd64f1fdc2e734\" pid:5670 exited_at:{seconds:1752556934 nanos:651977017}"
Jul 15 05:22:16.000072 containerd[1571]: time="2025-07-15T05:22:15.999833719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\" id:\"b8f179f43fa2d5595c6e806c5ded84519781b44411f0eb4f88eb5099a0edcf78\" pid:5692 exited_at:{seconds:1752556935 nanos:997995370}"
Jul 15 05:22:16.528440 systemd[1]: Started sshd@8-172.237.133.19:22-93.113.63.124:32964.service - OpenSSH per-connection server daemon (93.113.63.124:32964).
Jul 15 05:22:17.589439 sshd[5702]: Received disconnect from 93.113.63.124 port 32964:11: Bye Bye [preauth]
Jul 15 05:22:17.589439 sshd[5702]: Disconnected from authenticating user root 93.113.63.124 port 32964 [preauth]
Jul 15 05:22:17.592011 systemd[1]: sshd@8-172.237.133.19:22-93.113.63.124:32964.service: Deactivated successfully.
Jul 15 05:22:21.825222 containerd[1571]: time="2025-07-15T05:22:21.825064590Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\" id:\"76b1085507aee2b1ae770c4c2dbf52df87e390e1d17ec2ff7954a99fd1052eb2\" pid:5719 exited_at:{seconds:1752556941 nanos:824407134}"
Jul 15 05:22:25.750042 kubelet[2738]: E0715 05:22:25.749710 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:22:31.592750 containerd[1571]: time="2025-07-15T05:22:31.592702619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\" id:\"b501a518a5b8ff85b87c6d04ec547f68f61a372eee6307889590065c99eeb7c3\" pid:5742 exited_at:{seconds:1752556951 nanos:592465650}"
Jul 15 05:22:35.750883 kubelet[2738]: E0715 05:22:35.750170 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:22:37.751456 kubelet[2738]: E0715 05:22:37.750063 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:22:45.890937 containerd[1571]: time="2025-07-15T05:22:45.890897450Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\" id:\"9c5e27d48fe68f95962fe71fbfb0db5c40ee44c6a4a129822b2eb2a1d03b2d1c\" pid:5783 exited_at:{seconds:1752556965 nanos:890670711}"
Jul 15 05:22:46.249141 systemd[1]: Started sshd@9-172.237.133.19:22-139.178.68.195:57568.service - OpenSSH per-connection server daemon (139.178.68.195:57568).
Jul 15 05:22:46.596631 sshd[5793]: Accepted publickey for core from 139.178.68.195 port 57568 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:22:46.598360 sshd-session[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:22:46.603482 systemd-logind[1541]: New session 8 of user core.
Jul 15 05:22:46.612634 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 15 05:22:46.913912 sshd[5796]: Connection closed by 139.178.68.195 port 57568
Jul 15 05:22:46.914353 sshd-session[5793]: pam_unix(sshd:session): session closed for user core
Jul 15 05:22:46.919408 systemd[1]: sshd@9-172.237.133.19:22-139.178.68.195:57568.service: Deactivated successfully.
Jul 15 05:22:46.921924 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 05:22:46.922862 systemd-logind[1541]: Session 8 logged out. Waiting for processes to exit.
Jul 15 05:22:46.925291 systemd-logind[1541]: Removed session 8.
Jul 15 05:22:50.147347 containerd[1571]: time="2025-07-15T05:22:50.147304621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\" id:\"847fd5045a93a92463fff788ca8d052e6d15ceed20a67fba8e062e44a353374e\" pid:5821 exited_at:{seconds:1752556970 nanos:146923883}"
Jul 15 05:22:51.840342 containerd[1571]: time="2025-07-15T05:22:51.840295926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\" id:\"f072d1caf1f14c358cf561044c2329856806a058f3cc766be6dd3b5aacea8626\" pid:5843 exited_at:{seconds:1752556971 nanos:839923908}"
Jul 15 05:22:51.978741 systemd[1]: Started sshd@10-172.237.133.19:22-139.178.68.195:50434.service - OpenSSH per-connection server daemon (139.178.68.195:50434).
Jul 15 05:22:52.335772 sshd[5854]: Accepted publickey for core from 139.178.68.195 port 50434 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:22:52.338702 sshd-session[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:22:52.344657 systemd-logind[1541]: New session 9 of user core.
Jul 15 05:22:52.350612 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 05:22:52.656025 sshd[5857]: Connection closed by 139.178.68.195 port 50434
Jul 15 05:22:52.655035 sshd-session[5854]: pam_unix(sshd:session): session closed for user core
Jul 15 05:22:52.662253 systemd-logind[1541]: Session 9 logged out. Waiting for processes to exit.
Jul 15 05:22:52.663027 systemd[1]: sshd@10-172.237.133.19:22-139.178.68.195:50434.service: Deactivated successfully.
Jul 15 05:22:52.667070 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 05:22:52.670872 systemd-logind[1541]: Removed session 9.
Jul 15 05:22:52.714787 systemd[1]: Started sshd@11-172.237.133.19:22-139.178.68.195:50450.service - OpenSSH per-connection server daemon (139.178.68.195:50450).
Jul 15 05:22:53.057208 sshd[5870]: Accepted publickey for core from 139.178.68.195 port 50450 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:22:53.058819 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:22:53.063319 systemd-logind[1541]: New session 10 of user core.
Jul 15 05:22:53.069645 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 05:22:53.390037 sshd[5873]: Connection closed by 139.178.68.195 port 50450
Jul 15 05:22:53.390725 sshd-session[5870]: pam_unix(sshd:session): session closed for user core
Jul 15 05:22:53.396762 systemd-logind[1541]: Session 10 logged out. Waiting for processes to exit.
Jul 15 05:22:53.397319 systemd[1]: sshd@11-172.237.133.19:22-139.178.68.195:50450.service: Deactivated successfully.
Jul 15 05:22:53.401304 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 05:22:53.404238 systemd-logind[1541]: Removed session 10.
Jul 15 05:22:53.447948 systemd[1]: Started sshd@12-172.237.133.19:22-139.178.68.195:50452.service - OpenSSH per-connection server daemon (139.178.68.195:50452).
Jul 15 05:22:53.781521 sshd[5884]: Accepted publickey for core from 139.178.68.195 port 50452 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:22:53.783405 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:22:53.788395 systemd-logind[1541]: New session 11 of user core.
Jul 15 05:22:53.794726 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 05:22:54.082484 sshd[5887]: Connection closed by 139.178.68.195 port 50452
Jul 15 05:22:54.083704 sshd-session[5884]: pam_unix(sshd:session): session closed for user core
Jul 15 05:22:54.088250 systemd[1]: sshd@12-172.237.133.19:22-139.178.68.195:50452.service: Deactivated successfully.
Jul 15 05:22:54.090793 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 05:22:54.092120 systemd-logind[1541]: Session 11 logged out. Waiting for processes to exit.
Jul 15 05:22:54.094124 systemd-logind[1541]: Removed session 11.
Jul 15 05:22:59.146178 systemd[1]: Started sshd@13-172.237.133.19:22-139.178.68.195:50454.service - OpenSSH per-connection server daemon (139.178.68.195:50454).
Jul 15 05:22:59.496050 sshd[5909]: Accepted publickey for core from 139.178.68.195 port 50454 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:22:59.497654 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:22:59.503459 systemd-logind[1541]: New session 12 of user core.
Jul 15 05:22:59.509646 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 05:22:59.750926 kubelet[2738]: E0715 05:22:59.749891 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:22:59.803536 sshd[5912]: Connection closed by 139.178.68.195 port 50454
Jul 15 05:22:59.804005 sshd-session[5909]: pam_unix(sshd:session): session closed for user core
Jul 15 05:22:59.809279 systemd-logind[1541]: Session 12 logged out. Waiting for processes to exit.
Jul 15 05:22:59.809828 systemd[1]: sshd@13-172.237.133.19:22-139.178.68.195:50454.service: Deactivated successfully.
Jul 15 05:22:59.812348 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 05:22:59.815242 systemd-logind[1541]: Removed session 12.
Jul 15 05:23:01.596281 containerd[1571]: time="2025-07-15T05:23:01.596202194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\" id:\"3413a27af5a3cc17bdaaa2719d1a91507508803f84e6c9a6e38c2b23589a9cf4\" pid:5950 exit_status:1 exited_at:{seconds:1752556981 nanos:595689456}"
Jul 15 05:23:04.862716 systemd[1]: Started sshd@14-172.237.133.19:22-139.178.68.195:57692.service - OpenSSH per-connection server daemon (139.178.68.195:57692).
Jul 15 05:23:05.191965 sshd[5963]: Accepted publickey for core from 139.178.68.195 port 57692 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:23:05.193110 sshd-session[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:23:05.197435 systemd-logind[1541]: New session 13 of user core.
Jul 15 05:23:05.204607 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 05:23:05.484590 sshd[5966]: Connection closed by 139.178.68.195 port 57692
Jul 15 05:23:05.485374 sshd-session[5963]: pam_unix(sshd:session): session closed for user core
Jul 15 05:23:05.489435 systemd-logind[1541]: Session 13 logged out. Waiting for processes to exit.
Jul 15 05:23:05.489902 systemd[1]: sshd@14-172.237.133.19:22-139.178.68.195:57692.service: Deactivated successfully.
Jul 15 05:23:05.492009 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 05:23:05.493982 systemd-logind[1541]: Removed session 13.
Jul 15 05:23:10.546701 systemd[1]: Started sshd@15-172.237.133.19:22-139.178.68.195:33516.service - OpenSSH per-connection server daemon (139.178.68.195:33516).
Jul 15 05:23:10.880592 sshd[5978]: Accepted publickey for core from 139.178.68.195 port 33516 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:23:10.884703 sshd-session[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:23:10.894543 systemd-logind[1541]: New session 14 of user core.
Jul 15 05:23:10.897002 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 05:23:11.206636 sshd[5981]: Connection closed by 139.178.68.195 port 33516
Jul 15 05:23:11.206026 sshd-session[5978]: pam_unix(sshd:session): session closed for user core
Jul 15 05:23:11.210517 systemd-logind[1541]: Session 14 logged out. Waiting for processes to exit.
Jul 15 05:23:11.212055 systemd[1]: sshd@15-172.237.133.19:22-139.178.68.195:33516.service: Deactivated successfully.
Jul 15 05:23:11.217321 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 05:23:11.222525 systemd-logind[1541]: Removed session 14.
Jul 15 05:23:11.267692 systemd[1]: Started sshd@16-172.237.133.19:22-139.178.68.195:33532.service - OpenSSH per-connection server daemon (139.178.68.195:33532).
Jul 15 05:23:11.597864 sshd[5993]: Accepted publickey for core from 139.178.68.195 port 33532 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:23:11.599443 sshd-session[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:23:11.606066 systemd-logind[1541]: New session 15 of user core.
Jul 15 05:23:11.610648 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 05:23:12.053455 sshd[5996]: Connection closed by 139.178.68.195 port 33532
Jul 15 05:23:12.055695 sshd-session[5993]: pam_unix(sshd:session): session closed for user core
Jul 15 05:23:12.062350 systemd[1]: sshd@16-172.237.133.19:22-139.178.68.195:33532.service: Deactivated successfully.
Jul 15 05:23:12.064918 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 05:23:12.068984 systemd-logind[1541]: Session 15 logged out. Waiting for processes to exit.
Jul 15 05:23:12.071884 systemd-logind[1541]: Removed session 15.
Jul 15 05:23:12.122721 systemd[1]: Started sshd@17-172.237.133.19:22-139.178.68.195:33534.service - OpenSSH per-connection server daemon (139.178.68.195:33534).
Jul 15 05:23:12.491042 sshd[6008]: Accepted publickey for core from 139.178.68.195 port 33534 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:23:12.492070 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:23:12.502295 systemd-logind[1541]: New session 16 of user core.
Jul 15 05:23:12.508745 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 05:23:12.750140 kubelet[2738]: E0715 05:23:12.750012 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:23:14.525574 sshd[6011]: Connection closed by 139.178.68.195 port 33534
Jul 15 05:23:14.526893 sshd-session[6008]: pam_unix(sshd:session): session closed for user core
Jul 15 05:23:14.533009 systemd[1]: sshd@17-172.237.133.19:22-139.178.68.195:33534.service: Deactivated successfully.
Jul 15 05:23:14.533443 systemd-logind[1541]: Session 16 logged out. Waiting for processes to exit.
Jul 15 05:23:14.537348 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 05:23:14.540039 systemd[1]: session-16.scope: Consumed 477ms CPU time, 80.3M memory peak.
Jul 15 05:23:14.542857 systemd-logind[1541]: Removed session 16.
Jul 15 05:23:14.585311 systemd[1]: Started sshd@18-172.237.133.19:22-139.178.68.195:33550.service - OpenSSH per-connection server daemon (139.178.68.195:33550).
Jul 15 05:23:14.640182 containerd[1571]: time="2025-07-15T05:23:14.640140607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\" id:\"99407974a07b2c84dec88801e51cd1ed9b446ba65bfd6cf704cb13722c767393\" pid:6042 exited_at:{seconds:1752556994 nanos:639800758}"
Jul 15 05:23:14.923854 sshd[6028]: Accepted publickey for core from 139.178.68.195 port 33550 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:23:14.925452 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:23:14.930704 systemd-logind[1541]: New session 17 of user core.
Jul 15 05:23:14.935627 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 05:23:15.327943 sshd[6052]: Connection closed by 139.178.68.195 port 33550
Jul 15 05:23:15.328710 sshd-session[6028]: pam_unix(sshd:session): session closed for user core
Jul 15 05:23:15.334136 systemd-logind[1541]: Session 17 logged out. Waiting for processes to exit.
Jul 15 05:23:15.334457 systemd[1]: sshd@18-172.237.133.19:22-139.178.68.195:33550.service: Deactivated successfully.
Jul 15 05:23:15.337218 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 05:23:15.340012 systemd-logind[1541]: Removed session 17.
Jul 15 05:23:15.397052 systemd[1]: Started sshd@19-172.237.133.19:22-139.178.68.195:33556.service - OpenSSH per-connection server daemon (139.178.68.195:33556).
Jul 15 05:23:15.756222 sshd[6062]: Accepted publickey for core from 139.178.68.195 port 33556 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:23:15.757253 sshd-session[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:23:15.762771 systemd-logind[1541]: New session 18 of user core.
Jul 15 05:23:15.767671 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 05:23:15.906161 containerd[1571]: time="2025-07-15T05:23:15.906115717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521e4337c7e1f16304221b91da1a14518c762151cb6643499e7a91cf2219bf47\" id:\"73e8db5a5d9a3e1f6cfb3401b5164e42e6432804b93399a3d29f50427e72f325\" pid:6078 exited_at:{seconds:1752556995 nanos:905818367}"
Jul 15 05:23:16.052964 sshd[6065]: Connection closed by 139.178.68.195 port 33556
Jul 15 05:23:16.053705 sshd-session[6062]: pam_unix(sshd:session): session closed for user core
Jul 15 05:23:16.061613 systemd[1]: sshd@19-172.237.133.19:22-139.178.68.195:33556.service: Deactivated successfully.
Jul 15 05:23:16.065403 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 05:23:16.066866 systemd-logind[1541]: Session 18 logged out. Waiting for processes to exit.
Jul 15 05:23:16.068721 systemd-logind[1541]: Removed session 18.
Jul 15 05:23:17.750282 kubelet[2738]: E0715 05:23:17.749579 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:23:19.752909 kubelet[2738]: E0715 05:23:19.751835 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jul 15 05:23:21.112670 systemd[1]: Started sshd@20-172.237.133.19:22-139.178.68.195:60878.service - OpenSSH per-connection server daemon (139.178.68.195:60878).
Jul 15 05:23:21.458179 sshd[6101]: Accepted publickey for core from 139.178.68.195 port 60878 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:23:21.459888 sshd-session[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:23:21.464786 systemd-logind[1541]: New session 19 of user core.
Jul 15 05:23:21.472056 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 05:23:21.831755 sshd[6104]: Connection closed by 139.178.68.195 port 60878
Jul 15 05:23:21.833736 sshd-session[6101]: pam_unix(sshd:session): session closed for user core
Jul 15 05:23:21.839751 systemd-logind[1541]: Session 19 logged out. Waiting for processes to exit.
Jul 15 05:23:21.843059 systemd[1]: sshd@20-172.237.133.19:22-139.178.68.195:60878.service: Deactivated successfully.
Jul 15 05:23:21.848404 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 05:23:21.855672 systemd-logind[1541]: Removed session 19.
Jul 15 05:23:21.901767 containerd[1571]: time="2025-07-15T05:23:21.901711672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e331f9cc6b31602462a03af2133dd0352b7ec7a91047c35695d8b851b26a11e\" id:\"d404b6ce8f5cc333ceec036ee4ad2fb98371f1c02c64351ee5f4954961aa0a16\" pid:6124 exited_at:{seconds:1752557001 nanos:901105049}"
Jul 15 05:23:26.891056 systemd[1]: Started sshd@21-172.237.133.19:22-139.178.68.195:60882.service - OpenSSH per-connection server daemon (139.178.68.195:60882).
Jul 15 05:23:27.219432 sshd[6139]: Accepted publickey for core from 139.178.68.195 port 60882 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:23:27.220959 sshd-session[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:23:27.225487 systemd-logind[1541]: New session 20 of user core.
Jul 15 05:23:27.233622 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 05:23:27.514898 sshd[6142]: Connection closed by 139.178.68.195 port 60882
Jul 15 05:23:27.515690 sshd-session[6139]: pam_unix(sshd:session): session closed for user core
Jul 15 05:23:27.520033 systemd-logind[1541]: Session 20 logged out. Waiting for processes to exit.
Jul 15 05:23:27.520646 systemd[1]: sshd@21-172.237.133.19:22-139.178.68.195:60882.service: Deactivated successfully.
Jul 15 05:23:27.523204 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 05:23:27.525966 systemd-logind[1541]: Removed session 20.
Jul 15 05:23:31.585347 containerd[1571]: time="2025-07-15T05:23:31.585256827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1869a1df87ca4cdb9c8c8212536bf64024e2d6ec3aed4103d3aff30db01fe851\" id:\"795926829ec9c6ec4fb18fd718247d46dcfc1fa0e9389fd0ca66c80837ec576f\" pid:6166 exited_at:{seconds:1752557011 nanos:584751866}"
Jul 15 05:23:32.580452 systemd[1]: Started sshd@22-172.237.133.19:22-139.178.68.195:37350.service - OpenSSH per-connection server daemon (139.178.68.195:37350).
Jul 15 05:23:32.930629 sshd[6180]: Accepted publickey for core from 139.178.68.195 port 37350 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:23:32.931969 sshd-session[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:23:32.937188 systemd-logind[1541]: New session 21 of user core.
Jul 15 05:23:32.942630 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 05:23:33.241747 sshd[6183]: Connection closed by 139.178.68.195 port 37350
Jul 15 05:23:33.242356 sshd-session[6180]: pam_unix(sshd:session): session closed for user core
Jul 15 05:23:33.247615 systemd[1]: sshd@22-172.237.133.19:22-139.178.68.195:37350.service: Deactivated successfully.
Jul 15 05:23:33.250141 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 05:23:33.251177 systemd-logind[1541]: Session 21 logged out. Waiting for processes to exit.
Jul 15 05:23:33.253099 systemd-logind[1541]: Removed session 21.