Jan 30 13:50:52.920707 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:50:52.920727 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:50:52.920739 kernel: BIOS-provided physical RAM map: Jan 30 13:50:52.920745 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:50:52.920751 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:50:52.920758 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:50:52.920765 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 30 13:50:52.920772 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 30 13:50:52.920778 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 13:50:52.920786 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 30 13:50:52.920793 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 13:50:52.920799 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:50:52.920805 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 13:50:52.920811 kernel: NX (Execute Disable) protection: active Jan 30 13:50:52.920819 kernel: APIC: Static calls initialized Jan 30 13:50:52.920828 kernel: SMBIOS 2.8 present. 
Jan 30 13:50:52.920835 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 30 13:50:52.920842 kernel: Hypervisor detected: KVM Jan 30 13:50:52.920849 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:50:52.920855 kernel: kvm-clock: using sched offset of 2151195820 cycles Jan 30 13:50:52.920862 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:50:52.920870 kernel: tsc: Detected 2794.748 MHz processor Jan 30 13:50:52.920877 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:50:52.920884 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:50:52.920891 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 30 13:50:52.920900 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 13:50:52.920907 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:50:52.920914 kernel: Using GB pages for direct mapping Jan 30 13:50:52.920921 kernel: ACPI: Early table checksum verification disabled Jan 30 13:50:52.920928 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 30 13:50:52.920935 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:50:52.920942 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:50:52.920949 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:50:52.920958 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 30 13:50:52.920965 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:50:52.920972 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:50:52.920979 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:50:52.920986 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:50:52.920993 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 30 13:50:52.921000 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 30 13:50:52.921010 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 30 13:50:52.921020 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 30 13:50:52.921027 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 30 13:50:52.921034 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 30 13:50:52.921041 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 30 13:50:52.921048 kernel: No NUMA configuration found Jan 30 13:50:52.921055 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 30 13:50:52.921062 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 30 13:50:52.921072 kernel: Zone ranges: Jan 30 13:50:52.921079 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:50:52.921086 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 30 13:50:52.921093 kernel: Normal empty Jan 30 13:50:52.921106 kernel: Movable zone start for each node Jan 30 13:50:52.921114 kernel: Early memory node ranges Jan 30 13:50:52.921121 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 13:50:52.921128 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 30 13:50:52.921135 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 30 13:50:52.921144 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:50:52.921151 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 13:50:52.921159 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 30 13:50:52.921166 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:50:52.921173 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:50:52.921180 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:50:52.921187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:50:52.921210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:50:52.921226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:50:52.921236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:50:52.921243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:50:52.921250 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:50:52.921257 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:50:52.921264 kernel: TSC deadline timer available Jan 30 13:50:52.921271 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 30 13:50:52.921283 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:50:52.921290 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 30 13:50:52.921297 kernel: kvm-guest: setup PV sched yield Jan 30 13:50:52.921307 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 30 13:50:52.921314 kernel: Booting paravirtualized kernel on KVM Jan 30 13:50:52.921322 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:50:52.921329 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 30 13:50:52.921336 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 30 13:50:52.921343 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 30 13:50:52.921350 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 30 13:50:52.921357 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:50:52.921364 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:50:52.921373 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:50:52.921383 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:50:52.921390 kernel: random: crng init done Jan 30 13:50:52.921397 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:50:52.921404 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:50:52.921411 kernel: Fallback order for Node 0: 0 Jan 30 13:50:52.921419 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 30 13:50:52.921426 kernel: Policy zone: DMA32 Jan 30 13:50:52.921433 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:50:52.921443 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Jan 30 13:50:52.921450 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 13:50:52.921457 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:50:52.921464 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:50:52.921471 kernel: Dynamic Preempt: voluntary Jan 30 13:50:52.921479 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:50:52.921487 kernel: rcu: RCU event tracing is enabled. Jan 30 13:50:52.921494 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 13:50:52.921501 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:50:52.921511 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:50:52.921518 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:50:52.921525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:50:52.921533 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 13:50:52.921540 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 30 13:50:52.921547 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:50:52.921554 kernel: Console: colour VGA+ 80x25 Jan 30 13:50:52.921561 kernel: printk: console [ttyS0] enabled Jan 30 13:50:52.921568 kernel: ACPI: Core revision 20230628 Jan 30 13:50:52.921578 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:50:52.921585 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:50:52.921592 kernel: x2apic enabled Jan 30 13:50:52.921599 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:50:52.921607 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 30 13:50:52.921625 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 30 13:50:52.921633 kernel: kvm-guest: setup PV IPIs Jan 30 13:50:52.921651 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:50:52.921659 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 13:50:52.921666 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 30 13:50:52.921674 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 30 13:50:52.921681 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 30 13:50:52.921691 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 30 13:50:52.921699 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:50:52.921706 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:50:52.921714 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:50:52.921724 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:50:52.921732 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 30 13:50:52.921739 kernel: RETBleed: Mitigation: untrained return thunk Jan 30 13:50:52.921747 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:50:52.921754 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:50:52.921762 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 30 13:50:52.921770 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 30 13:50:52.921778 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 30 13:50:52.921786 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:50:52.921796 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:50:52.921803 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:50:52.921811 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:50:52.921818 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 30 13:50:52.921826 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:50:52.921834 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:50:52.921841 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:50:52.921849 kernel: landlock: Up and running. Jan 30 13:50:52.921856 kernel: SELinux: Initializing. Jan 30 13:50:52.921867 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:50:52.921874 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:50:52.921882 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 30 13:50:52.921889 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:50:52.921897 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:50:52.921905 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:50:52.921912 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 30 13:50:52.921920 kernel: ... version: 0 Jan 30 13:50:52.921929 kernel: ... bit width: 48 Jan 30 13:50:52.921937 kernel: ... generic registers: 6 Jan 30 13:50:52.921944 kernel: ... value mask: 0000ffffffffffff Jan 30 13:50:52.921952 kernel: ... max period: 00007fffffffffff Jan 30 13:50:52.921959 kernel: ... fixed-purpose events: 0 Jan 30 13:50:52.921967 kernel: ... 
event mask: 000000000000003f Jan 30 13:50:52.921974 kernel: signal: max sigframe size: 1776 Jan 30 13:50:52.921982 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:50:52.921989 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:50:52.921997 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:50:52.922007 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:50:52.922015 kernel: .... node #0, CPUs: #1 #2 #3 Jan 30 13:50:52.922022 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 13:50:52.922030 kernel: smpboot: Max logical packages: 1 Jan 30 13:50:52.922037 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 30 13:50:52.922045 kernel: devtmpfs: initialized Jan 30 13:50:52.922052 kernel: x86/mm: Memory block size: 128MB Jan 30 13:50:52.922060 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:50:52.922068 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 13:50:52.922077 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:50:52.922085 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:50:52.922093 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:50:52.922106 kernel: audit: type=2000 audit(1738245052.328:1): state=initialized audit_enabled=0 res=1 Jan 30 13:50:52.922114 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:50:52.922121 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:50:52.922129 kernel: cpuidle: using governor menu Jan 30 13:50:52.922137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:50:52.922144 kernel: dca service started, version 1.12.1 Jan 30 13:50:52.922154 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 30 13:50:52.922162 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 30 13:50:52.922170 kernel: PCI: Using configuration type 1 for base access Jan 30 13:50:52.922177 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:50:52.922185 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:50:52.922192 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:50:52.922200 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:50:52.922208 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:50:52.922216 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:50:52.922225 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:50:52.922233 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:50:52.922241 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:50:52.922248 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:50:52.922256 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:50:52.922263 kernel: ACPI: Interpreter enabled Jan 30 13:50:52.922271 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:50:52.922278 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:50:52.922286 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:50:52.922296 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:50:52.922303 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 13:50:52.922311 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:50:52.922494 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:50:52.922639 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 30 13:50:52.922764 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 30 13:50:52.922774 kernel: PCI host bridge to bus 0000:00 Jan 30 13:50:52.922914 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:50:52.923028 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:50:52.923150 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:50:52.923261 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 30 13:50:52.923370 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 13:50:52.923481 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 30 13:50:52.923592 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:50:52.923776 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 13:50:52.923909 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 30 13:50:52.924033 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 30 13:50:52.924161 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 30 13:50:52.924282 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 30 13:50:52.924403 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:50:52.924538 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:50:52.924683 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 30 13:50:52.924805 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 30 13:50:52.924926 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 30 13:50:52.925060 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:50:52.925193 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 13:50:52.925319 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 30 
13:50:52.925564 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 30 13:50:52.925710 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:50:52.925834 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 30 13:50:52.925955 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 30 13:50:52.926076 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 30 13:50:52.926206 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 30 13:50:52.926334 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 13:50:52.926464 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 13:50:52.926592 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 13:50:52.926756 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 30 13:50:52.926875 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 30 13:50:52.927001 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 13:50:52.927129 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 30 13:50:52.927139 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:50:52.927151 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:50:52.927159 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:50:52.927167 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:50:52.927174 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 13:50:52.927182 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 13:50:52.927190 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 13:50:52.927197 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 13:50:52.927205 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 13:50:52.927213 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 13:50:52.927223 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 13:50:52.927230 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 13:50:52.927238 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 13:50:52.927246 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 13:50:52.927253 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 13:50:52.927261 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 13:50:52.927268 kernel: iommu: Default domain type: Translated Jan 30 13:50:52.927276 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:50:52.927284 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:50:52.927294 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:50:52.927302 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:50:52.927309 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 30 13:50:52.927429 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 30 13:50:52.927548 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 13:50:52.927679 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:50:52.927689 kernel: vgaarb: loaded Jan 30 13:50:52.927697 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:50:52.927710 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:50:52.927717 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:50:52.927725 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 
13:50:52.927733 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:50:52.927741 kernel: pnp: PnP ACPI init Jan 30 13:50:52.927870 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 13:50:52.927881 kernel: pnp: PnP ACPI: found 6 devices Jan 30 13:50:52.927889 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:50:52.927900 kernel: NET: Registered PF_INET protocol family Jan 30 13:50:52.927908 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:50:52.927916 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 13:50:52.927923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:50:52.927931 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:50:52.927938 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 13:50:52.927946 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 13:50:52.927954 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:50:52.927962 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:50:52.927971 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:50:52.927979 kernel: NET: Registered PF_XDP protocol family Jan 30 13:50:52.928090 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:50:52.928208 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:50:52.928323 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:50:52.928432 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 30 13:50:52.928542 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 13:50:52.928727 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 30 13:50:52.928743 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:50:52.928751 kernel: Initialise system trusted keyrings Jan 30 13:50:52.928759 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 13:50:52.928767 kernel: Key type asymmetric registered Jan 30 13:50:52.928774 kernel: Asymmetric key parser 'x509' registered Jan 30 13:50:52.928781 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:50:52.928789 kernel: io scheduler mq-deadline registered Jan 30 13:50:52.928796 kernel: io scheduler kyber registered Jan 30 13:50:52.928804 kernel: io scheduler bfq registered Jan 30 13:50:52.928814 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:50:52.928822 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 13:50:52.928830 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 13:50:52.928837 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 13:50:52.928845 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:50:52.928853 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:50:52.928860 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:50:52.928868 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:50:52.928876 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:50:52.928886 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:50:52.929016 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 13:50:52.929140 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 30 13:50:52.929254 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:50:52 UTC (1738245052) Jan 30 13:50:52.929366 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 13:50:52.929376 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 13:50:52.929384 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:50:52.929391 kernel: Segment Routing with IPv6 Jan 30 13:50:52.929402 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:50:52.929410 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:50:52.929417 kernel: Key type dns_resolver registered Jan 30 13:50:52.929425 kernel: IPI shorthand broadcast: enabled Jan 30 13:50:52.929433 kernel: sched_clock: Marking stable (563002776, 104345467)->(717751891, -50403648) Jan 30 13:50:52.929440 kernel: registered taskstats version 1 Jan 30 13:50:52.929448 kernel: Loading compiled-in X.509 certificates Jan 30 13:50:52.929455 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:50:52.929463 kernel: Key type .fscrypt registered Jan 30 13:50:52.929473 kernel: Key type fscrypt-provisioning registered Jan 30 13:50:52.929481 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:50:52.929488 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:50:52.929496 kernel: ima: No architecture policies found Jan 30 13:50:52.929503 kernel: clk: Disabling unused clocks Jan 30 13:50:52.929511 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:50:52.929518 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:50:52.929526 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:50:52.929533 kernel: Run /init as init process Jan 30 13:50:52.929543 kernel: with arguments: Jan 30 13:50:52.929551 kernel: /init Jan 30 13:50:52.929558 kernel: with environment: Jan 30 13:50:52.929565 kernel: HOME=/ Jan 30 13:50:52.929573 kernel: TERM=linux Jan 30 13:50:52.929580 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:50:52.929590 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:50:52.929600 systemd[1]: Detected virtualization kvm. Jan 30 13:50:52.929611 systemd[1]: Detected architecture x86-64. Jan 30 13:50:52.929631 systemd[1]: Running in initrd. Jan 30 13:50:52.929639 systemd[1]: No hostname configured, using default hostname. Jan 30 13:50:52.929646 systemd[1]: Hostname set to . Jan 30 13:50:52.929655 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:50:52.929663 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:50:52.929671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:50:52.929679 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:50:52.929691 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:50:52.929710 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 30 13:50:52.929720 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:50:52.929729 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:50:52.929739 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:50:52.929750 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:50:52.929758 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:50:52.929767 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:50:52.929775 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:50:52.929783 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:50:52.929792 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:50:52.929800 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:50:52.929808 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:50:52.929819 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:50:52.929827 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:50:52.929836 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:50:52.929844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:50:52.929855 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:50:52.929863 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:50:52.929871 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:50:52.929880 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:50:52.929890 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:50:52.929899 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:50:52.929907 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:50:52.929915 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:50:52.929924 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:50:52.929932 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:52.929940 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:50:52.929948 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:50:52.929957 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:50:52.929968 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:50:52.929994 systemd-journald[192]: Collecting audit messages is disabled. Jan 30 13:50:52.930016 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:50:52.930024 systemd-journald[192]: Journal started Jan 30 13:50:52.930044 systemd-journald[192]: Runtime Journal (/run/log/journal/acb9122c1ff045339fa779873a00ee0e) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:50:52.919201 systemd-modules-load[194]: Inserted module 'overlay' Jan 30 13:50:52.969628 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 30 13:50:52.969644 kernel: Bridge firewalling registered Jan 30 13:50:52.969655 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:50:52.945666 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 30 13:50:52.969876 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:50:52.971827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:52.982915 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:50:52.983915 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:50:52.985010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:50:52.989787 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:50:52.998222 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:50:53.001314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:53.004030 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:50:53.006686 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:50:53.019821 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:50:53.022523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:50:53.030744 dracut-cmdline[229]: dracut-dracut-053 Jan 30 13:50:53.033480 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:50:53.060242 systemd-resolved[232]: Positive Trust Anchors: Jan 30 13:50:53.060258 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:50:53.060302 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:50:53.062899 systemd-resolved[232]: Defaulting to hostname 'linux'. Jan 30 13:50:53.063920 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:50:53.069411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:50:53.123647 kernel: SCSI subsystem initialized Jan 30 13:50:53.132642 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:50:53.143642 kernel: iscsi: registered transport (tcp) Jan 30 13:50:53.164655 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:50:53.164693 kernel: QLogic iSCSI HBA Driver Jan 30 13:50:53.208514 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 13:50:53.215863 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:50:53.241849 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:50:53.241939 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:50:53.241950 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:50:53.284662 kernel: raid6: avx2x4 gen() 30577 MB/s Jan 30 13:50:53.301651 kernel: raid6: avx2x2 gen() 31233 MB/s Jan 30 13:50:53.318715 kernel: raid6: avx2x1 gen() 26072 MB/s Jan 30 13:50:53.318745 kernel: raid6: using algorithm avx2x2 gen() 31233 MB/s Jan 30 13:50:53.336735 kernel: raid6: .... xor() 19967 MB/s, rmw enabled Jan 30 13:50:53.336789 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:50:53.356636 kernel: xor: automatically using best checksumming function avx Jan 30 13:50:53.503647 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:50:53.515610 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:50:53.522733 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:50:53.534286 systemd-udevd[416]: Using default interface naming scheme 'v255'. Jan 30 13:50:53.538829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:50:53.545735 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:50:53.558142 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Jan 30 13:50:53.588713 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:50:53.602740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:50:53.662638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:50:53.673787 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:50:53.689352 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:50:53.692394 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:50:53.695195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:50:53.698061 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:50:53.702714 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 13:50:53.720748 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 13:50:53.721529 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:50:53.721548 kernel: GPT:9289727 != 19775487 Jan 30 13:50:53.721562 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:50:53.721577 kernel: GPT:9289727 != 19775487 Jan 30 13:50:53.721590 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:50:53.721610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:50:53.721640 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:50:53.711736 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:50:53.721181 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:50:53.734223 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:50:53.734250 kernel: AES CTR mode by8 optimization enabled Jan 30 13:50:53.735646 kernel: libata version 3.00 loaded. 
Jan 30 13:50:53.745202 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 13:50:53.780172 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 13:50:53.780197 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 13:50:53.780867 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 13:50:53.781049 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (476) Jan 30 13:50:53.781065 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (474) Jan 30 13:50:53.781092 kernel: scsi host0: ahci Jan 30 13:50:53.781303 kernel: scsi host1: ahci Jan 30 13:50:53.781487 kernel: scsi host2: ahci Jan 30 13:50:53.784729 kernel: scsi host3: ahci Jan 30 13:50:53.784915 kernel: scsi host4: ahci Jan 30 13:50:53.785110 kernel: scsi host5: ahci Jan 30 13:50:53.785295 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 30 13:50:53.785316 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 30 13:50:53.785331 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 30 13:50:53.785345 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 30 13:50:53.785359 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 30 13:50:53.785373 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 30 13:50:53.749963 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:50:53.750327 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:53.753558 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:50:53.755201 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:50:53.756050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:53.760808 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:53.768935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:53.791343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:50:53.822942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:53.834107 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:50:53.839177 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:50:53.844158 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:50:53.845387 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:50:53.862757 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:50:53.865985 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:50:53.874350 disk-uuid[566]: Primary Header is updated. Jan 30 13:50:53.874350 disk-uuid[566]: Secondary Entries is updated. Jan 30 13:50:53.874350 disk-uuid[566]: Secondary Header is updated. 
Jan 30 13:50:53.880141 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:50:53.884629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:50:53.892251 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:54.089959 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 13:50:54.090041 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:54.090075 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:54.091642 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:54.091730 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 13:50:54.092665 kernel: ata3.00: applying bridge limits Jan 30 13:50:54.093641 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:54.093660 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:54.094684 kernel: ata3.00: configured for UDMA/100 Jan 30 13:50:54.095650 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:50:54.134169 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 13:50:54.146291 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:50:54.146305 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:50:54.885260 disk-uuid[569]: The operation has completed successfully. Jan 30 13:50:54.887175 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:50:54.914227 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:50:54.914351 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:50:54.942888 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:50:54.946037 sh[593]: Success Jan 30 13:50:54.957640 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 13:50:54.991140 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:50:55.007259 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:50:55.011848 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:50:55.024628 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:50:55.024694 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:55.024709 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:50:55.024723 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:50:55.025349 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:50:55.030102 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:50:55.031337 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:50:55.032299 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:50:55.035861 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:50:55.048961 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:50:55.049016 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:55.049027 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:50:55.051658 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:50:55.060751 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 30 13:50:55.062675 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:50:55.072016 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:50:55.079792 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:50:55.134537 ignition[695]: Ignition 2.19.0 Jan 30 13:50:55.134550 ignition[695]: Stage: fetch-offline Jan 30 13:50:55.134584 ignition[695]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:55.134593 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:50:55.134706 ignition[695]: parsed url from cmdline: "" Jan 30 13:50:55.134710 ignition[695]: no config URL provided Jan 30 13:50:55.134715 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:50:55.134724 ignition[695]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:50:55.134749 ignition[695]: op(1): [started] loading QEMU firmware config module Jan 30 13:50:55.134754 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:50:55.141825 ignition[695]: op(1): [finished] loading QEMU firmware config module Jan 30 13:50:55.166675 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:50:55.186769 ignition[695]: parsing config with SHA512: 873658fc8e7b0ec5c83f6f266ab869ee36b402cd59fdbcfff7abc831ff232f0504207112b87ba5e7939fda847f5c1f945b2d2c8646640dc72a0acef9ac4df45d Jan 30 13:50:55.187844 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:50:55.191217 unknown[695]: fetched base config from "system" Jan 30 13:50:55.192377 unknown[695]: fetched user config from "qemu" Jan 30 13:50:55.192787 ignition[695]: fetch-offline: fetch-offline passed Jan 30 13:50:55.192850 ignition[695]: Ignition finished successfully Jan 30 13:50:55.197300 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:50:55.214877 systemd-networkd[782]: lo: Link UP Jan 30 13:50:55.214888 systemd-networkd[782]: lo: Gained carrier Jan 30 13:50:55.216752 systemd-networkd[782]: Enumeration completed Jan 30 13:50:55.217279 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:50:55.217283 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:55.218416 systemd-networkd[782]: eth0: Link UP Jan 30 13:50:55.218420 systemd-networkd[782]: eth0: Gained carrier Jan 30 13:50:55.218427 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:50:55.220443 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:50:55.227206 systemd[1]: Reached target network.target - Network. Jan 30 13:50:55.229065 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:50:55.239811 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 13:50:55.246808 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:50:55.254372 ignition[785]: Ignition 2.19.0 Jan 30 13:50:55.254383 ignition[785]: Stage: kargs Jan 30 13:50:55.254562 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:55.254575 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:50:55.255551 ignition[785]: kargs: kargs passed Jan 30 13:50:55.255598 ignition[785]: Ignition finished successfully Jan 30 13:50:55.262273 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:50:55.275788 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:50:55.291193 ignition[794]: Ignition 2.19.0 Jan 30 13:50:55.291205 ignition[794]: Stage: disks Jan 30 13:50:55.291394 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:55.291407 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:50:55.292386 ignition[794]: disks: disks passed Jan 30 13:50:55.292434 ignition[794]: Ignition finished successfully Jan 30 13:50:55.298484 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:50:55.299121 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:50:55.300988 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:50:55.302974 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:50:55.305252 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:50:55.305567 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:50:55.323756 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:50:55.333918 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.119 Jan 30 13:50:55.333934 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Jan 30 13:50:55.336650 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:50:55.341266 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:50:55.349728 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:50:55.429635 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:50:55.429937 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:50:55.432089 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:50:55.447696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:50:55.450211 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:50:55.452724 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:50:55.458254 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) Jan 30 13:50:55.458279 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:50:55.458293 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:55.458314 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:50:55.452767 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 30 13:50:55.462514 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:50:55.457573 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:50:55.465129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:50:55.467089 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:50:55.482750 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:50:55.511489 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:50:55.516297 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:50:55.519688 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:50:55.523787 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:50:55.594627 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:50:55.608707 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:50:55.612139 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:50:55.616637 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:50:55.634453 ignition[925]: INFO : Ignition 2.19.0 Jan 30 13:50:55.636342 ignition[925]: INFO : Stage: mount Jan 30 13:50:55.636342 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:55.636342 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:50:55.636342 ignition[925]: INFO : mount: mount passed Jan 30 13:50:55.636342 ignition[925]: INFO : Ignition finished successfully Jan 30 13:50:55.638139 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:50:55.643444 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:50:55.645678 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:50:56.022877 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:50:56.040767 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:50:56.046640 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938) Jan 30 13:50:56.048770 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:50:56.048792 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:56.048803 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:50:56.051637 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:50:56.053024 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:50:56.070716 ignition[955]: INFO : Ignition 2.19.0 Jan 30 13:50:56.070716 ignition[955]: INFO : Stage: files Jan 30 13:50:56.072428 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:56.072428 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:50:56.074969 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:50:56.076213 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:50:56.076213 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:50:56.079498 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:50:56.080925 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:50:56.082300 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:50:56.081288 unknown[955]: wrote ssh authorized keys file for user: core Jan 30 13:50:56.084947 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:50:56.084947 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:50:56.111240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:50:56.198851 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:50:56.198851 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:50:56.203137 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:50:56.498765 systemd-networkd[782]: eth0: Gained IPv6LL Jan 30 13:50:56.746442 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:50:57.074040 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:50:57.074040 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:50:57.078064 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:50:57.078064 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:50:57.078064 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:50:57.078064 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 30 13:50:57.078064 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:50:57.078064 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:50:57.078064 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 30 13:50:57.078064 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:50:57.100755 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:50:57.105047 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:50:57.106889 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:50:57.106889 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:50:57.106889 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:50:57.106889 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:50:57.106889 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:50:57.106889 ignition[955]: INFO : files: files passed Jan 30 13:50:57.106889 ignition[955]: INFO : Ignition finished successfully Jan 30 13:50:57.108151 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:50:57.115806 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:50:57.118456 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 30 13:50:57.120057 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:50:57.120192 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:50:57.128150 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:50:57.130853 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:50:57.130853 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:50:57.135668 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:50:57.133280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:50:57.135857 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:50:57.149762 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:50:57.174771 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:50:57.174901 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:50:57.177554 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:50:57.179602 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:50:57.181675 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:50:57.182463 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:50:57.200277 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:50:57.211763 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:50:57.221415 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:50:57.222785 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:50:57.225253 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:50:57.227542 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:50:57.227694 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:50:57.230130 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:50:57.232176 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:50:57.234638 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:50:57.237100 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:50:57.239545 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:50:57.242215 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:50:57.244690 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:50:57.247428 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:50:57.249869 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:50:57.252391 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:50:57.254355 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:50:57.254491 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:50:57.256949 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 13:50:57.258660 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:50:57.260950 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:50:57.261114 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:50:57.263395 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:50:57.263536 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:50:57.265998 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:50:57.266144 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:50:57.268473 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:50:57.270505 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:50:57.273672 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:50:57.275694 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:50:57.277880 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:50:57.280191 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:50:57.280307 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:50:57.282474 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:50:57.282587 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:50:57.285043 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:50:57.285188 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:50:57.287864 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:50:57.288008 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:50:57.297782 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:50:57.299629 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:50:57.299775 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:50:57.302938 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:50:57.305074 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:50:57.305389 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:50:57.311364 ignition[1009]: INFO : Ignition 2.19.0 Jan 30 13:50:57.311364 ignition[1009]: INFO : Stage: umount Jan 30 13:50:57.307736 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:50:57.314845 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:57.314845 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:50:57.314845 ignition[1009]: INFO : umount: umount passed Jan 30 13:50:57.314845 ignition[1009]: INFO : Ignition finished successfully Jan 30 13:50:57.308021 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:50:57.315034 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:50:57.315152 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:50:57.317925 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:50:57.318040 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:50:57.321963 systemd[1]: Stopped target network.target - Network. 
Jan 30 13:50:57.323274 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:50:57.323329 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:50:57.325798 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:50:57.325846 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:50:57.327109 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:50:57.327157 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:50:57.329211 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:50:57.329260 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:50:57.332222 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:50:57.334669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:50:57.337685 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:50:57.340658 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 30 13:50:57.341818 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:50:57.341956 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:50:57.344725 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:50:57.344885 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:50:57.346969 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:50:57.347037 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:50:57.381812 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:50:57.383857 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:50:57.384906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:50:57.387663 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:50:57.387719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:50:57.390688 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:50:57.390742 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:50:57.393816 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:50:57.393868 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:50:57.397722 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:50:57.409593 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:50:57.410711 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:50:57.419564 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:50:57.420705 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:50:57.423496 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:50:57.423556 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:50:57.426816 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:50:57.426859 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:50:57.429877 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:50:57.429932 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 13:50:57.433025 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:50:57.433079 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:50:57.435999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:50:57.436051 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:57.451746 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:50:57.453983 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:50:57.454039 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:50:57.457553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:50:57.457603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:57.461032 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:50:57.462165 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:50:57.545378 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:50:57.546453 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:50:57.548924 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:50:57.551000 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:50:57.551064 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:50:57.567759 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:50:57.576373 systemd[1]: Switching root. Jan 30 13:50:57.605896 systemd-journald[192]: Journal stopped Jan 30 13:50:58.823415 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 30 13:50:58.823498 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:50:58.823521 kernel: SELinux: policy capability open_perms=1 Jan 30 13:50:58.823543 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:50:58.823568 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:50:58.823584 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:50:58.823600 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:50:58.823634 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:50:58.823651 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:50:58.823667 kernel: audit: type=1403 audit(1738245058.041:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:50:58.823692 systemd[1]: Successfully loaded SELinux policy in 39.239ms. Jan 30 13:50:58.823724 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.221ms. Jan 30 13:50:58.823744 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:50:58.823765 systemd[1]: Detected virtualization kvm. Jan 30 13:50:58.823783 systemd[1]: Detected architecture x86-64. Jan 30 13:50:58.823800 systemd[1]: Detected first boot. Jan 30 13:50:58.823817 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:50:58.823833 zram_generator::config[1053]: No configuration found. Jan 30 13:50:58.823850 systemd[1]: Populated /etc with preset unit settings. 
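After the switch to the real root, the kernel lists the policy capabilities that came with the loaded SELinux policy, and systemd reports the load and relabel times. Those same capability flags are exposed through selinuxfs, so they can be checked from userspace; a minimal sketch, assuming selinuxfs is mounted at its usual /sys/fs/selinux location:

```python
from pathlib import Path

SELINUXFS = Path("/sys/fs/selinux")

def selinux_summary():
    """Report enforcing mode and the policy capability flags the kernel
    printed at policy load (network_peer_controls, open_perms, ...)."""
    if not SELINUXFS.is_dir():
        return "selinuxfs not mounted / SELinux disabled"
    enforcing = (SELINUXFS / "enforce").read_text().strip() == "1"
    caps = {
        f.name: f.read_text().strip()
        for f in sorted((SELINUXFS / "policy_capabilities").iterdir())
    }
    return {"enforcing": enforcing, "capabilities": caps}

if __name__ == "__main__":
    print(selinux_summary())
```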
Jan 30 13:50:58.823866 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:50:58.823882 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:50:58.823902 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:50:58.823921 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:50:58.823947 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:50:58.823964 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:50:58.823981 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:50:58.823997 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:50:58.824014 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:50:58.824031 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:50:58.824052 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:50:58.824070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:50:58.824087 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:50:58.824105 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:50:58.824122 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:50:58.824140 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:50:58.824158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:50:58.824176 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:50:58.824194 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:50:58.824213 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:50:58.824229 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:50:58.824245 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:50:58.824261 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:50:58.824279 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:50:58.824295 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:50:58.824311 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:50:58.824328 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:50:58.824349 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:50:58.824368 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:50:58.824387 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:50:58.824406 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:50:58.824423 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:50:58.824440 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:50:58.824457 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 30 13:50:58.824474 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:50:58.824491 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:50:58.824511 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:58.824529 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:50:58.824547 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:50:58.824563 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:50:58.824580 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:50:58.824597 systemd[1]: Reached target machines.target - Containers. Jan 30 13:50:58.824634 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:50:58.824652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:50:58.824672 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:50:58.824689 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:50:58.824707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:50:58.824725 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:50:58.824742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:50:58.824761 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:50:58.824778 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:50:58.824795 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:50:58.824813 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:50:58.824834 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:50:58.824851 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:50:58.824868 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:50:58.824885 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:50:58.824900 kernel: loop: module loaded Jan 30 13:50:58.824915 kernel: fuse: init (API version 7.39) Jan 30 13:50:58.824931 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:50:58.824960 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:50:58.824976 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:50:58.824996 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:50:58.825014 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:50:58.825031 systemd[1]: Stopped verity-setup.service. Jan 30 13:50:58.825049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:58.825073 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
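The modprobe@ template instances started above (configfs, dm_mod, drm, efi_pstore, fuse, loop) each just run modprobe for the named module; whether a given driver ends up as a loadable module or is built into the kernel varies by image, which is why the kernel prints "loop: module loaded" and "fuse: init" here. A small sketch that cross-checks the request list against the running kernel, with the caveat that built-in drivers never appear in /proc/modules:

```python
import os

# Modules requested via modprobe@.service instances in the log above.
WANTED = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

def loaded_modules(path="/proc/modules"):
    with open(path) as f:
        return {line.split()[0] for line in f}

def builtin_modules():
    # Built-in drivers are listed in modules.builtin for the running
    # kernel rather than in /proc/modules.
    path = f"/lib/modules/{os.uname().release}/modules.builtin"
    try:
        with open(path) as f:
            return {line.strip().rsplit("/", 1)[-1].removesuffix(".ko") for line in f}
    except FileNotFoundError:
        return set()

present = loaded_modules() | builtin_modules()
# Note: built-in entries use the object file name, which may differ from
# the module name by '-' vs '_'; this sketch does not normalize that.
print("missing:", sorted(WANTED - present) or "none")
```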
Jan 30 13:50:58.825089 kernel: ACPI: bus type drm_connector registered Jan 30 13:50:58.825106 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:50:58.825124 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:50:58.825141 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:50:58.825162 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:50:58.825179 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:50:58.825197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:50:58.825214 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:50:58.825236 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:50:58.825253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:50:58.825269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:50:58.825285 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:50:58.825301 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:50:58.825317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:50:58.825334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:50:58.825350 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:50:58.825366 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:50:58.825386 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:50:58.825426 systemd-journald[1116]: Collecting audit messages is disabled. Jan 30 13:50:58.825462 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:50:58.825480 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:50:58.825504 systemd-journald[1116]: Journal started Jan 30 13:50:58.825533 systemd-journald[1116]: Runtime Journal (/run/log/journal/acb9122c1ff045339fa779873a00ee0e) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:50:58.531977 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:50:58.552958 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:50:58.553376 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:50:58.827843 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:50:58.828723 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:50:58.830277 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:50:58.844727 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:50:58.855811 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:50:58.858708 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:50:58.859934 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:50:58.859992 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:50:58.862415 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:50:58.865152 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 30 13:50:58.867741 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:50:58.869048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:50:58.872484 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:50:58.876430 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:50:58.878044 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:50:58.884742 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:50:58.886292 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:50:58.889810 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:50:58.893308 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:50:58.897439 systemd-journald[1116]: Time spent on flushing to /var/log/journal/acb9122c1ff045339fa779873a00ee0e is 12.852ms for 950 entries. Jan 30 13:50:58.897439 systemd-journald[1116]: System Journal (/var/log/journal/acb9122c1ff045339fa779873a00ee0e) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:50:59.195339 systemd-journald[1116]: Received client request to flush runtime journal. Jan 30 13:50:59.195420 kernel: loop0: detected capacity change from 0 to 205544 Jan 30 13:50:59.195453 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:50:59.195475 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 13:50:59.195500 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 13:50:58.898758 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:50:58.901839 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:50:58.903562 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:50:58.915398 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:50:59.197655 kernel: loop3: detected capacity change from 0 to 205544 Jan 30 13:50:58.918510 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:50:58.933352 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:50:58.947951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:50:59.169846 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:50:59.172264 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:50:59.174455 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:50:59.187811 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:50:59.190974 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:50:59.203578 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:50:59.216045 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:50:59.217938 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
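journald reports above that flushing the runtime journal to /var/log/journal took 12.852 ms for 950 entries, with the system journal at 8.0M used of a 195.6M cap. A quick worked check of the per-entry cost implied by those numbers:

```python
flush_ms, entries = 12.852, 950  # values from the journald message above
print(f"{flush_ms / entries * 1000:.1f} µs per entry")  # ≈ 13.5 µs
```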
Jan 30 13:50:59.222646 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 13:50:59.228822 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:50:59.231638 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 13:50:59.236865 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:50:59.240306 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:50:59.241762 (sd-merge)[1183]: Merged extensions into '/usr'. Jan 30 13:50:59.245796 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:50:59.245811 systemd[1]: Reloading... Jan 30 13:50:59.261258 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 30 13:50:59.261275 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 30 13:50:59.305705 zram_generator::config[1216]: No configuration found. Jan 30 13:50:59.389181 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:50:59.428366 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:50:59.480116 systemd[1]: Reloading finished in 233 ms. Jan 30 13:50:59.516488 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:50:59.518143 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:50:59.519907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:50:59.540797 systemd[1]: Starting ensure-sysext.service... Jan 30 13:50:59.542821 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:50:59.551523 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:50:59.551543 systemd[1]: Reloading... Jan 30 13:50:59.570865 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:50:59.571264 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:50:59.572489 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:50:59.572895 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 30 13:50:59.573010 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 30 13:50:59.579959 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:50:59.579976 systemd-tmpfiles[1256]: Skipping /boot Jan 30 13:50:59.594400 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:50:59.594513 systemd-tmpfiles[1256]: Skipping /boot Jan 30 13:50:59.606702 zram_generator::config[1289]: No configuration found. Jan 30 13:50:59.757348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:50:59.811105 systemd[1]: Reloading finished in 259 ms. Jan 30 13:50:59.830434 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
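The sd-merge messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes .raw is the one Ignition linked into /etc/extensions earlier. systemd-sysext discovers images in a fixed set of directories, so a quick listing of what it would consider looks roughly like the sketch below, assuming the standard search paths documented in systemd-sysext(8):

```python
from pathlib import Path

# Standard sysext search directories per systemd-sysext(8).
SEARCH_DIRS = [
    "/etc/extensions",
    "/run/extensions",
    "/var/lib/extensions",
    "/usr/lib/extensions",
    "/usr/local/lib/extensions",
]

def candidate_sysexts():
    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            # Both *.raw disk images and plain directory trees are accepted.
            if entry.suffix == ".raw" or entry.is_dir():
                yield entry

if __name__ == "__main__":
    for image in candidate_sysexts():
        print(image)
```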
Jan 30 13:50:59.832324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:50:59.852029 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:50:59.854571 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:50:59.859865 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:50:59.864155 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:50:59.867782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:50:59.870498 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:50:59.877824 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:59.878219 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:50:59.879552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:50:59.886106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:50:59.888779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:50:59.890252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:50:59.895141 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:50:59.896222 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:59.897243 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:50:59.899253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:50:59.899419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:50:59.901161 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:50:59.901317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:50:59.903583 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:50:59.903822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:50:59.905885 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Jan 30 13:50:59.915920 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:59.916140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:50:59.922859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:50:59.923767 augenrules[1351]: No rules Jan 30 13:50:59.925843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:50:59.928853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:50:59.930288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:50:59.932429 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 30 13:50:59.934811 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:59.935788 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:50:59.938215 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:50:59.940292 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:50:59.942310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:50:59.942537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:50:59.945214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:50:59.945444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:50:59.948515 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:50:59.948750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:50:59.953001 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:50:59.971046 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:50:59.973005 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:50:59.981989 systemd[1]: Finished ensure-sysext.service. Jan 30 13:50:59.986227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:59.986383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:50:59.996810 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:00.001751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:51:00.009369 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1368) Jan 30 13:51:00.005987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:00.014092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:00.018064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:00.026916 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:51:00.032783 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:51:00.034130 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:51:00.034163 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:00.034893 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:00.035138 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:00.037031 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:51:00.037249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:51:00.039000 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 30 13:51:00.039212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:00.041264 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:00.041485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:00.051979 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:51:00.061353 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:51:00.061553 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:51:00.080034 systemd-resolved[1326]: Positive Trust Anchors: Jan 30 13:51:00.080058 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:51:00.080100 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:51:00.085148 systemd-resolved[1326]: Defaulting to hostname 'linux'. Jan 30 13:51:00.087302 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:51:00.089050 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:00.092658 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:51:00.094921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:51:00.102648 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:51:00.103831 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:51:00.122608 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:51:00.129637 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:51:00.131389 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:51:00.131570 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:51:00.132934 systemd-networkd[1401]: lo: Link UP Jan 30 13:51:00.132941 systemd-networkd[1401]: lo: Gained carrier Jan 30 13:51:00.135797 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:51:00.136758 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:51:00.136314 systemd-networkd[1401]: Enumeration completed Jan 30 13:51:00.137107 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:00.137122 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:51:00.138642 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 30 13:51:00.138711 systemd-networkd[1401]: eth0: Link UP Jan 30 13:51:00.138717 systemd-networkd[1401]: eth0: Gained carrier Jan 30 13:51:00.138733 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:00.140057 systemd[1]: Reached target network.target - Network. Jan 30 13:51:00.141036 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:51:00.149800 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:51:00.151662 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:51:00.152713 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jan 30 13:51:00.579842 systemd-resolved[1326]: Clock change detected. Flushing caches. Jan 30 13:51:00.579913 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:51:00.580114 systemd-timesyncd[1402]: Initial clock synchronization to Thu 2025-01-30 13:51:00.579805 UTC. Jan 30 13:51:00.614922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:51:00.662898 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:51:00.674322 kernel: kvm_amd: TSC scaling supported Jan 30 13:51:00.674351 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:51:00.674364 kernel: kvm_amd: Nested Paging enabled Jan 30 13:51:00.674376 kernel: kvm_amd: LBR virtualization supported Jan 30 13:51:00.674951 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:51:00.676122 kernel: kvm_amd: Virtual GIF supported Jan 30 13:51:00.696790 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:51:00.737555 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:51:00.749002 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:51:00.785669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:00.795258 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:51:00.830676 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:51:00.832359 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:00.833553 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:51:00.834817 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:51:00.836163 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:51:00.837973 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:51:00.839271 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:51:00.840589 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:51:00.841908 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:51:00.841943 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:51:00.842973 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:51:00.844645 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
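After the root switch, eth0 comes up again and re-acquires the same DHCPv4 lease (10.0.0.119/16 via gateway 10.0.0.1), and systemd-timesyncd steps the clock against 10.0.0.1, which is why systemd-resolved flushes its caches at that moment. The addressing in the lease is easy to sanity-check with the standard library, using the values copied from the log:

```python
import ipaddress

# Lease parameters logged by systemd-networkd above.
iface = ipaddress.ip_interface("10.0.0.119/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                  # 10.0.0.0/16
print(iface.network.num_addresses)    # 65536 addresses in the prefix
print(gateway in iface.network)       # True: the gateway is on-link
```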
Jan 30 13:51:00.847688 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:51:00.859325 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:51:00.861594 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:51:00.863159 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:51:00.864324 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:51:00.865299 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:51:00.866276 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:51:00.866301 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:51:00.867362 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:51:00.869404 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:51:00.871149 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:51:00.873925 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:51:00.878037 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:51:00.879108 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:51:00.880921 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:51:00.885869 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:51:00.887454 jq[1434]: false Jan 30 13:51:00.889155 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:51:00.892053 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:51:00.897356 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:51:00.899218 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:51:00.899618 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:51:00.900932 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:51:00.903419 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:51:00.906828 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:51:00.908944 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:51:00.909393 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:51:00.911206 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:51:00.911684 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 30 13:51:00.913345 extend-filesystems[1435]: Found loop3 Jan 30 13:51:00.914453 extend-filesystems[1435]: Found loop4 Jan 30 13:51:00.915272 jq[1443]: true Jan 30 13:51:00.916356 extend-filesystems[1435]: Found loop5 Jan 30 13:51:00.916356 extend-filesystems[1435]: Found sr0 Jan 30 13:51:00.916356 extend-filesystems[1435]: Found vda Jan 30 13:51:00.916356 extend-filesystems[1435]: Found vda1 Jan 30 13:51:00.916356 extend-filesystems[1435]: Found vda2 Jan 30 13:51:00.916356 extend-filesystems[1435]: Found vda3 Jan 30 13:51:00.916356 extend-filesystems[1435]: Found usr Jan 30 13:51:00.916356 extend-filesystems[1435]: Found vda4 Jan 30 13:51:00.916356 extend-filesystems[1435]: Found vda6 Jan 30 13:51:00.916356 extend-filesystems[1435]: Found vda7 Jan 30 13:51:00.916356 extend-filesystems[1435]: Found vda9 Jan 30 13:51:00.916356 extend-filesystems[1435]: Checking size of /dev/vda9 Jan 30 13:51:00.927921 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:51:00.925460 dbus-daemon[1433]: [system] SELinux support is enabled Jan 30 13:51:00.942800 update_engine[1442]: I20250130 13:51:00.939884 1442 main.cc:92] Flatcar Update Engine starting Jan 30 13:51:00.942800 update_engine[1442]: I20250130 13:51:00.942410 1442 update_check_scheduler.cc:74] Next update check in 10m48s Jan 30 13:51:00.942716 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:51:00.942757 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:51:00.945486 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:51:00.945508 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:51:00.950577 extend-filesystems[1435]: Resized partition /dev/vda9 Jan 30 13:51:00.953587 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:51:00.954178 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:51:00.960651 tar[1446]: linux-amd64/helm Jan 30 13:51:00.958905 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:51:00.960294 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:51:00.961610 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:51:00.963547 jq[1450]: true Jan 30 13:51:00.962135 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
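The extend-filesystems run above enumerates the block devices and checks /dev/vda9; the resize2fs and kernel lines just below report the ext4 filesystem on it, mounted at /, growing from 553472 to 1864699 blocks of 4 KiB. A quick worked check of what those block counts mean in bytes:

```python
BLOCK = 4096  # 4 KiB blocks, per the resize2fs output

before = 553472 * BLOCK    # ≈ 2.1 GiB (initial image size)
after = 1864699 * BLOCK    # ≈ 7.1 GiB (after growing to fill the disk)

gib = 1024 ** 3
print(f"before: {before / gib:.2f} GiB, after: {after / gib:.2f} GiB")
```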
Jan 30 13:51:00.970339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1382) Jan 30 13:51:00.970379 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:51:01.008798 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:51:01.028055 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:51:01.035146 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:51:01.035179 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:51:01.038592 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:51:01.038592 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:51:01.038592 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:51:01.037396 systemd-logind[1441]: New seat seat0. Jan 30 13:51:01.047404 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Jan 30 13:51:01.038808 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:51:01.050911 bash[1487]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:51:01.039893 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:51:01.047879 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:51:01.051781 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:51:01.054535 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:51:01.070379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:51:01.104326 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:51:01.128792 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:51:01.137997 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:51:01.143078 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:38448.service - OpenSSH per-connection server daemon (10.0.0.1:38448). Jan 30 13:51:01.151330 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:51:01.151683 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:51:01.163262 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:51:01.178750 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:51:01.189765 containerd[1453]: time="2025-01-30T13:51:01.189671928Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:51:01.190270 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:51:01.193050 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:51:01.194573 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:51:01.212606 sshd[1511]: Accepted publickey for core from 10.0.0.1 port 38448 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:01.213573 containerd[1453]: time="2025-01-30T13:51:01.213533166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:51:01.215267 sshd[1511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:01.216841 containerd[1453]: time="2025-01-30T13:51:01.216703071Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:01.216929 containerd[1453]: time="2025-01-30T13:51:01.216910249Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:51:01.216992 containerd[1453]: time="2025-01-30T13:51:01.216979259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:51:01.217235 containerd[1453]: time="2025-01-30T13:51:01.217218147Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:51:01.217292 containerd[1453]: time="2025-01-30T13:51:01.217279943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:01.217398 containerd[1453]: time="2025-01-30T13:51:01.217382485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:01.217453 containerd[1453]: time="2025-01-30T13:51:01.217441075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:01.217689 containerd[1453]: time="2025-01-30T13:51:01.217669654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:01.217740 containerd[1453]: time="2025-01-30T13:51:01.217728173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:01.217814 containerd[1453]: time="2025-01-30T13:51:01.217799397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:01.217857 containerd[1453]: time="2025-01-30T13:51:01.217846455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:01.218117 containerd[1453]: time="2025-01-30T13:51:01.218098097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:01.218474 containerd[1453]: time="2025-01-30T13:51:01.218453333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:01.218664 containerd[1453]: time="2025-01-30T13:51:01.218644822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:01.218715 containerd[1453]: time="2025-01-30T13:51:01.218703152Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 13:51:01.218870 containerd[1453]: time="2025-01-30T13:51:01.218855267Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:51:01.218965 containerd[1453]: time="2025-01-30T13:51:01.218951548Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:51:01.224105 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:51:01.226534 containerd[1453]: time="2025-01-30T13:51:01.225738686Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:51:01.226534 containerd[1453]: time="2025-01-30T13:51:01.225813967Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:51:01.226534 containerd[1453]: time="2025-01-30T13:51:01.225865143Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:51:01.226534 containerd[1453]: time="2025-01-30T13:51:01.225886022Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:51:01.226534 containerd[1453]: time="2025-01-30T13:51:01.225906791Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:51:01.226534 containerd[1453]: time="2025-01-30T13:51:01.226084434Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:51:01.226534 containerd[1453]: time="2025-01-30T13:51:01.226370952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:51:01.226534 containerd[1453]: time="2025-01-30T13:51:01.226500044Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:51:01.227260 containerd[1453]: time="2025-01-30T13:51:01.227203102Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:51:01.227260 containerd[1453]: time="2025-01-30T13:51:01.227255981Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:51:01.227327 containerd[1453]: time="2025-01-30T13:51:01.227274075Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:51:01.227327 containerd[1453]: time="2025-01-30T13:51:01.227291358Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:51:01.227327 containerd[1453]: time="2025-01-30T13:51:01.227305644Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:51:01.227327 containerd[1453]: time="2025-01-30T13:51:01.227320793Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:51:01.227434 containerd[1453]: time="2025-01-30T13:51:01.227336783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:51:01.227434 containerd[1453]: time="2025-01-30T13:51:01.227350378Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 30 13:51:01.227434 containerd[1453]: time="2025-01-30T13:51:01.227362281Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:51:01.227434 containerd[1453]: time="2025-01-30T13:51:01.227373973Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:51:01.227434 containerd[1453]: time="2025-01-30T13:51:01.227400452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227434 containerd[1453]: time="2025-01-30T13:51:01.227415270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227434 containerd[1453]: time="2025-01-30T13:51:01.227430979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227448001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227464392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227493908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227511370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227529073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227545244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227567165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227582614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227599295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227615 containerd[1453]: time="2025-01-30T13:51:01.227618591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227888 containerd[1453]: time="2025-01-30T13:51:01.227652986Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:51:01.227888 containerd[1453]: time="2025-01-30T13:51:01.227684836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227888 containerd[1453]: time="2025-01-30T13:51:01.227700014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.227888 containerd[1453]: time="2025-01-30T13:51:01.227714231Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jan 30 13:51:01.227888 containerd[1453]: time="2025-01-30T13:51:01.227800222Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:51:01.227888 containerd[1453]: time="2025-01-30T13:51:01.227827533Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:51:01.227888 containerd[1453]: time="2025-01-30T13:51:01.227842461Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:51:01.227888 containerd[1453]: time="2025-01-30T13:51:01.227862659Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:51:01.227888 containerd[1453]: time="2025-01-30T13:51:01.227873820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.228113 containerd[1453]: time="2025-01-30T13:51:01.227900159Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:51:01.228113 containerd[1453]: time="2025-01-30T13:51:01.227913965Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:51:01.228113 containerd[1453]: time="2025-01-30T13:51:01.227927911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:51:01.228371 containerd[1453]: time="2025-01-30T13:51:01.228299248Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:51:01.228371 containerd[1453]: time="2025-01-30T13:51:01.228359541Z" level=info msg="Connect containerd service" Jan 30 13:51:01.228556 containerd[1453]: time="2025-01-30T13:51:01.228389868Z" level=info msg="using legacy CRI server" Jan 30 13:51:01.228556 containerd[1453]: time="2025-01-30T13:51:01.228396891Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:51:01.228556 containerd[1453]: time="2025-01-30T13:51:01.228506116Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:51:01.229302 containerd[1453]: time="2025-01-30T13:51:01.229263717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:51:01.229683 containerd[1453]: time="2025-01-30T13:51:01.229482898Z" level=info msg="Start subscribing containerd event" Jan 30 13:51:01.229683 containerd[1453]: time="2025-01-30T13:51:01.229588375Z" level=info msg="Start recovering state" Jan 30 13:51:01.229683 containerd[1453]: time="2025-01-30T13:51:01.229630234Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:51:01.229844 containerd[1453]: time="2025-01-30T13:51:01.229704894Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:51:01.230759 containerd[1453]: time="2025-01-30T13:51:01.230331369Z" level=info msg="Start event monitor" Jan 30 13:51:01.230759 containerd[1453]: time="2025-01-30T13:51:01.230359361Z" level=info msg="Start snapshots syncer" Jan 30 13:51:01.230759 containerd[1453]: time="2025-01-30T13:51:01.230370031Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:51:01.230759 containerd[1453]: time="2025-01-30T13:51:01.230378357Z" level=info msg="Start streaming server" Jan 30 13:51:01.230759 containerd[1453]: time="2025-01-30T13:51:01.230442136Z" level=info msg="containerd successfully booted in 0.041685s" Jan 30 13:51:01.230390 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:51:01.232207 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:51:01.237912 systemd-logind[1441]: New session 1 of user core. Jan 30 13:51:01.245304 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:51:01.257203 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:51:01.261169 (systemd)[1526]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:51:01.383349 tar[1446]: linux-amd64/LICENSE Jan 30 13:51:01.383513 tar[1446]: linux-amd64/README.md Jan 30 13:51:01.394503 systemd[1526]: Queued start job for default target default.target. 
Jan 30 13:51:01.395821 systemd[1526]: Created slice app.slice - User Application Slice. Jan 30 13:51:01.395843 systemd[1526]: Reached target paths.target - Paths. Jan 30 13:51:01.395857 systemd[1526]: Reached target timers.target - Timers. Jan 30 13:51:01.397301 systemd[1526]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:51:01.398285 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:51:01.410892 systemd[1526]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:51:01.411054 systemd[1526]: Reached target sockets.target - Sockets. Jan 30 13:51:01.411075 systemd[1526]: Reached target basic.target - Basic System. Jan 30 13:51:01.411119 systemd[1526]: Reached target default.target - Main User Target. Jan 30 13:51:01.411154 systemd[1526]: Startup finished in 143ms. Jan 30 13:51:01.411358 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:51:01.421883 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:51:01.483852 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:38454.service - OpenSSH per-connection server daemon (10.0.0.1:38454). Jan 30 13:51:01.522623 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 38454 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:01.524444 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:01.528494 systemd-logind[1441]: New session 2 of user core. Jan 30 13:51:01.545905 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:51:01.602164 sshd[1540]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:01.612665 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:38454.service: Deactivated successfully. Jan 30 13:51:01.614302 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:51:01.615549 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:51:01.616745 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:38468.service - OpenSSH per-connection server daemon (10.0.0.1:38468). Jan 30 13:51:01.619212 systemd-logind[1441]: Removed session 2. Jan 30 13:51:01.659577 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 38468 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:01.660993 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:01.664470 systemd-logind[1441]: New session 3 of user core. Jan 30 13:51:01.672882 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:51:01.728522 sshd[1547]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:01.732439 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:38468.service: Deactivated successfully. Jan 30 13:51:01.734333 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:51:01.735000 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:51:01.735901 systemd-logind[1441]: Removed session 3. Jan 30 13:51:02.043927 systemd-networkd[1401]: eth0: Gained IPv6LL Jan 30 13:51:02.047148 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:51:02.048864 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:51:02.060958 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:51:02.063403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:51:02.065453 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:51:02.082972 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:51:02.083202 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:51:02.084955 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:51:02.085942 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:51:03.330164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:03.330752 (kubelet)[1575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:51:03.333908 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:51:03.349566 systemd[1]: Startup finished in 697ms (kernel) + 5.342s (initrd) + 4.920s (userspace) = 10.960s. Jan 30 13:51:04.106206 kubelet[1575]: E0130 13:51:04.106090 1575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:51:04.109524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:51:04.109805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:51:04.110278 systemd[1]: kubelet.service: Consumed 1.226s CPU time. Jan 30 13:51:11.740213 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:42022.service - OpenSSH per-connection server daemon (10.0.0.1:42022). Jan 30 13:51:11.774891 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 42022 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:11.776353 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:11.780303 systemd-logind[1441]: New session 4 of user core. Jan 30 13:51:11.794871 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:51:11.849104 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:11.866092 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:42022.service: Deactivated successfully. Jan 30 13:51:11.867783 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:51:11.869184 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:51:11.880030 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:42038.service - OpenSSH per-connection server daemon (10.0.0.1:42038). Jan 30 13:51:11.880974 systemd-logind[1441]: Removed session 4. Jan 30 13:51:11.910183 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 42038 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:11.911714 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:11.915562 systemd-logind[1441]: New session 5 of user core. Jan 30 13:51:11.925891 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:51:11.974031 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:11.994292 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:42038.service: Deactivated successfully. Jan 30 13:51:11.996043 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:51:11.997408 systemd-logind[1441]: Session 5 logged out. 
Waiting for processes to exit. Jan 30 13:51:11.998690 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:42042.service - OpenSSH per-connection server daemon (10.0.0.1:42042). Jan 30 13:51:11.999606 systemd-logind[1441]: Removed session 5. Jan 30 13:51:12.032096 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 42042 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:12.033456 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:12.037258 systemd-logind[1441]: New session 6 of user core. Jan 30 13:51:12.050967 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:51:12.105140 sshd[1602]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:12.119677 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:42042.service: Deactivated successfully. Jan 30 13:51:12.121400 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:51:12.123093 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:51:12.124315 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:42046.service - OpenSSH per-connection server daemon (10.0.0.1:42046). Jan 30 13:51:12.125047 systemd-logind[1441]: Removed session 6. Jan 30 13:51:12.158257 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 42046 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:12.159678 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:12.163170 systemd-logind[1441]: New session 7 of user core. Jan 30 13:51:12.174881 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:51:12.407106 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:51:12.407452 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:12.426913 sudo[1612]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:12.428975 sshd[1609]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:12.440602 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:42046.service: Deactivated successfully. Jan 30 13:51:12.442369 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:51:12.444114 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:51:12.445580 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:42050.service - OpenSSH per-connection server daemon (10.0.0.1:42050). Jan 30 13:51:12.446420 systemd-logind[1441]: Removed session 7. Jan 30 13:51:12.480160 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 42050 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:12.481617 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:12.485323 systemd-logind[1441]: New session 8 of user core. Jan 30 13:51:12.498888 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 13:51:12.553426 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:51:12.553797 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:12.557469 sudo[1621]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:12.564015 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:51:12.564433 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:12.583101 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:51:12.584777 auditctl[1624]: No rules Jan 30 13:51:12.586078 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:51:12.586366 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:51:12.588225 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:51:12.616826 augenrules[1642]: No rules Jan 30 13:51:12.618497 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:51:12.619749 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:12.621689 sshd[1617]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:12.633950 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:42050.service: Deactivated successfully. Jan 30 13:51:12.635859 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:51:12.637312 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:51:12.648101 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:42054.service - OpenSSH per-connection server daemon (10.0.0.1:42054). Jan 30 13:51:12.649131 systemd-logind[1441]: Removed session 8. Jan 30 13:51:12.678948 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 42054 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:12.680529 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:12.684988 systemd-logind[1441]: New session 9 of user core. Jan 30 13:51:12.695037 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:51:12.749032 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:51:12.749372 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:13.047207 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:51:13.047291 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:51:13.317232 dockerd[1673]: time="2025-01-30T13:51:13.317102850Z" level=info msg="Starting up" Jan 30 13:51:13.475821 dockerd[1673]: time="2025-01-30T13:51:13.475754438Z" level=info msg="Loading containers: start." Jan 30 13:51:13.594794 kernel: Initializing XFRM netlink socket Jan 30 13:51:13.679327 systemd-networkd[1401]: docker0: Link UP Jan 30 13:51:13.701420 dockerd[1673]: time="2025-01-30T13:51:13.701353617Z" level=info msg="Loading containers: done." Jan 30 13:51:13.716340 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck101517416-merged.mount: Deactivated successfully. 
Jan 30 13:51:13.729425 dockerd[1673]: time="2025-01-30T13:51:13.729351052Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:51:13.729589 dockerd[1673]: time="2025-01-30T13:51:13.729458303Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:51:13.729589 dockerd[1673]: time="2025-01-30T13:51:13.729583658Z" level=info msg="Daemon has completed initialization" Jan 30 13:51:13.766139 dockerd[1673]: time="2025-01-30T13:51:13.766078078Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:51:13.766308 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:51:14.318459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:51:14.329996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:14.406086 containerd[1453]: time="2025-01-30T13:51:14.406035156Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:51:14.474399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:14.480125 (kubelet)[1828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:51:14.590237 kubelet[1828]: E0130 13:51:14.590082 1828 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:51:14.598030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:51:14.598308 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:51:15.347268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706019339.mount: Deactivated successfully. 
Jan 30 13:51:16.188117 containerd[1453]: time="2025-01-30T13:51:16.188058256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:16.188835 containerd[1453]: time="2025-01-30T13:51:16.188788275Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 30 13:51:16.190542 containerd[1453]: time="2025-01-30T13:51:16.190493533Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:16.194484 containerd[1453]: time="2025-01-30T13:51:16.194450664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:16.195423 containerd[1453]: time="2025-01-30T13:51:16.195373625Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.789295387s" Jan 30 13:51:16.195423 containerd[1453]: time="2025-01-30T13:51:16.195409963Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 13:51:16.196859 containerd[1453]: time="2025-01-30T13:51:16.196835126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:51:17.353198 containerd[1453]: time="2025-01-30T13:51:17.353073505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:17.354314 containerd[1453]: time="2025-01-30T13:51:17.354249471Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 30 13:51:17.356566 containerd[1453]: time="2025-01-30T13:51:17.356500703Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:17.361113 containerd[1453]: time="2025-01-30T13:51:17.360963712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:17.362058 containerd[1453]: time="2025-01-30T13:51:17.362001188Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.165117691s" Jan 30 13:51:17.362140 containerd[1453]: time="2025-01-30T13:51:17.362057704Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 13:51:17.362744 
containerd[1453]: time="2025-01-30T13:51:17.362704457Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:51:19.163931 containerd[1453]: time="2025-01-30T13:51:19.163861800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:19.164681 containerd[1453]: time="2025-01-30T13:51:19.164604773Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 30 13:51:19.165906 containerd[1453]: time="2025-01-30T13:51:19.165872200Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:19.168853 containerd[1453]: time="2025-01-30T13:51:19.168816612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:19.169890 containerd[1453]: time="2025-01-30T13:51:19.169836414Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.807081623s" Jan 30 13:51:19.169890 containerd[1453]: time="2025-01-30T13:51:19.169875257Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 13:51:19.170749 containerd[1453]: time="2025-01-30T13:51:19.170711776Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:51:20.396308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456124968.mount: Deactivated successfully. 
Jan 30 13:51:21.176391 containerd[1453]: time="2025-01-30T13:51:21.176319596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:21.177887 containerd[1453]: time="2025-01-30T13:51:21.177843414Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 30 13:51:21.179651 containerd[1453]: time="2025-01-30T13:51:21.179488940Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:21.182814 containerd[1453]: time="2025-01-30T13:51:21.182764844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:21.183580 containerd[1453]: time="2025-01-30T13:51:21.183529157Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.012783888s" Jan 30 13:51:21.183616 containerd[1453]: time="2025-01-30T13:51:21.183577568Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:51:21.184062 containerd[1453]: time="2025-01-30T13:51:21.184039704Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:51:21.705257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011666910.mount: Deactivated successfully. 
Jan 30 13:51:22.383026 containerd[1453]: time="2025-01-30T13:51:22.382968003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:22.384035 containerd[1453]: time="2025-01-30T13:51:22.384003645Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:51:22.385726 containerd[1453]: time="2025-01-30T13:51:22.385702251Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:22.389032 containerd[1453]: time="2025-01-30T13:51:22.388997110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:22.390260 containerd[1453]: time="2025-01-30T13:51:22.390207730Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.20613278s" Jan 30 13:51:22.390323 containerd[1453]: time="2025-01-30T13:51:22.390264006Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:51:22.390829 containerd[1453]: time="2025-01-30T13:51:22.390804068Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:51:22.875912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount191360606.mount: Deactivated successfully. 
Jan 30 13:51:22.883297 containerd[1453]: time="2025-01-30T13:51:22.883236375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:22.884199 containerd[1453]: time="2025-01-30T13:51:22.884127426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:51:22.885533 containerd[1453]: time="2025-01-30T13:51:22.885491173Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:22.887956 containerd[1453]: time="2025-01-30T13:51:22.887925038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:22.888632 containerd[1453]: time="2025-01-30T13:51:22.888601586Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.69753ms" Jan 30 13:51:22.888690 containerd[1453]: time="2025-01-30T13:51:22.888631953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:51:22.889221 containerd[1453]: time="2025-01-30T13:51:22.889184299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:51:23.414416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042606423.mount: Deactivated successfully. Jan 30 13:51:24.818451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:51:24.827078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:24.987753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:24.993126 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:51:25.037789 kubelet[2017]: E0130 13:51:25.037683 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:51:25.041549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:51:25.041926 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:51:26.208485 containerd[1453]: time="2025-01-30T13:51:26.208401927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:26.209723 containerd[1453]: time="2025-01-30T13:51:26.209663333Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 30 13:51:26.211420 containerd[1453]: time="2025-01-30T13:51:26.211375704Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:26.214848 containerd[1453]: time="2025-01-30T13:51:26.214802350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:26.216475 containerd[1453]: time="2025-01-30T13:51:26.216425254Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.327199978s" Jan 30 13:51:26.216475 containerd[1453]: time="2025-01-30T13:51:26.216465229Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 13:51:28.434563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:28.447161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:28.474216 systemd[1]: Reloading requested from client PID 2056 ('systemctl') (unit session-9.scope)... Jan 30 13:51:28.474239 systemd[1]: Reloading... Jan 30 13:51:28.545797 zram_generator::config[2095]: No configuration found. Jan 30 13:51:28.731069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:28.811857 systemd[1]: Reloading finished in 337 ms. Jan 30 13:51:28.859824 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:28.864907 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:51:28.865225 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:28.867361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:29.019696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:29.032375 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:51:29.070535 kubelet[2145]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:29.070535 kubelet[2145]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:51:29.070535 kubelet[2145]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:29.070996 kubelet[2145]: I0130 13:51:29.070588 2145 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:51:29.193351 kubelet[2145]: I0130 13:51:29.193303 2145 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:51:29.193351 kubelet[2145]: I0130 13:51:29.193336 2145 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:51:29.193574 kubelet[2145]: I0130 13:51:29.193553 2145 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:51:29.213211 kubelet[2145]: I0130 13:51:29.213158 2145 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:51:29.213371 kubelet[2145]: E0130 13:51:29.213315 2145 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:29.220096 kubelet[2145]: E0130 13:51:29.220051 2145 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:51:29.220096 kubelet[2145]: I0130 13:51:29.220087 2145 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:51:29.225904 kubelet[2145]: I0130 13:51:29.225870 2145 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:51:29.226007 kubelet[2145]: I0130 13:51:29.225969 2145 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:51:29.226137 kubelet[2145]: I0130 13:51:29.226103 2145 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:51:29.226296 kubelet[2145]: I0130 13:51:29.226130 2145 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:51:29.226374 kubelet[2145]: I0130 13:51:29.226304 2145 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:51:29.226374 kubelet[2145]: I0130 13:51:29.226312 2145 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:51:29.226418 kubelet[2145]: I0130 13:51:29.226413 2145 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:29.227744 kubelet[2145]: I0130 13:51:29.227723 2145 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:51:29.227744 kubelet[2145]: I0130 13:51:29.227742 2145 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:51:29.227831 kubelet[2145]: I0130 13:51:29.227788 2145 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:51:29.227831 kubelet[2145]: I0130 13:51:29.227803 2145 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:51:29.230556 kubelet[2145]: W0130 13:51:29.230398 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Jan 30 13:51:29.230556 kubelet[2145]: E0130 13:51:29.230482 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:29.230717 kubelet[2145]: W0130 13:51:29.230616 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Jan 30 13:51:29.230717 kubelet[2145]: E0130 13:51:29.230659 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:29.232169 kubelet[2145]: I0130 13:51:29.232141 2145 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:51:29.233519 kubelet[2145]: I0130 13:51:29.233492 2145 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:51:29.233980 kubelet[2145]: W0130 13:51:29.233954 2145 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:51:29.234939 kubelet[2145]: I0130 13:51:29.234638 2145 server.go:1269] "Started kubelet" Jan 30 13:51:29.234986 kubelet[2145]: I0130 13:51:29.234945 2145 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:51:29.235032 kubelet[2145]: I0130 13:51:29.234974 2145 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:51:29.236010 kubelet[2145]: I0130 13:51:29.235295 2145 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:51:29.236156 kubelet[2145]: I0130 13:51:29.236130 2145 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:51:29.237189 kubelet[2145]: I0130 13:51:29.236398 2145 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:51:29.237529 kubelet[2145]: I0130 13:51:29.237501 2145 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:51:29.239584 kubelet[2145]: E0130 13:51:29.238217 2145 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:51:29.239584 kubelet[2145]: I0130 13:51:29.239022 2145 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:51:29.241689 kubelet[2145]: E0130 13:51:29.241646 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:29.242118 kubelet[2145]: E0130 13:51:29.238754 2145 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7cb2f465875b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:51:29.234618203 +0000 UTC m=+0.197135350,LastTimestamp:2025-01-30 13:51:29.234618203 +0000 UTC m=+0.197135350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:51:29.242339 kubelet[2145]: E0130 13:51:29.242288 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms" Jan 30 13:51:29.242393 kubelet[2145]: I0130 13:51:29.242370 2145 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:51:29.242461 kubelet[2145]: I0130 13:51:29.242444 2145 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:51:29.242813 kubelet[2145]: I0130 13:51:29.242796 2145 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:51:29.243114 kubelet[2145]: I0130 13:51:29.243081 2145 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:51:29.243562 kubelet[2145]: W0130 13:51:29.242804 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Jan 30 13:51:29.243688 kubelet[2145]: E0130 13:51:29.243649 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:29.245458 kubelet[2145]: I0130 13:51:29.245429 2145 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:51:29.255354 kubelet[2145]: I0130 13:51:29.255312 2145 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:51:29.256971 kubelet[2145]: I0130 13:51:29.256557 2145 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:51:29.256971 kubelet[2145]: I0130 13:51:29.256579 2145 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:51:29.256971 kubelet[2145]: I0130 13:51:29.256600 2145 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:51:29.256971 kubelet[2145]: E0130 13:51:29.256648 2145 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:51:29.258560 kubelet[2145]: W0130 13:51:29.258526 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Jan 30 13:51:29.258621 kubelet[2145]: E0130 13:51:29.258565 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:29.261788 kubelet[2145]: I0130 13:51:29.261748 2145 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:51:29.261788 kubelet[2145]: I0130 13:51:29.261760 2145 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:51:29.261788 kubelet[2145]: I0130 13:51:29.261789 2145 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:29.342442 kubelet[2145]: E0130 13:51:29.342362 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:29.356813 kubelet[2145]: E0130 13:51:29.356730 2145 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:51:29.443197 kubelet[2145]: E0130 13:51:29.443135 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:29.443487 kubelet[2145]: E0130 13:51:29.443430 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms" Jan 30 13:51:29.544251 kubelet[2145]: E0130 13:51:29.544202 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:29.557500 kubelet[2145]: E0130 13:51:29.557403 2145 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:51:29.594706 kubelet[2145]: I0130 13:51:29.594530 2145 policy_none.go:49] "None policy: Start" Jan 30 13:51:29.595491 kubelet[2145]: I0130 13:51:29.595459 2145 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:51:29.595491 kubelet[2145]: I0130 13:51:29.595501 2145 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:51:29.603202 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:51:29.617857 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:51:29.621079 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:51:29.631915 kubelet[2145]: I0130 13:51:29.631862 2145 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:51:29.632120 kubelet[2145]: I0130 13:51:29.632102 2145 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:51:29.632173 kubelet[2145]: I0130 13:51:29.632121 2145 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:51:29.632402 kubelet[2145]: I0130 13:51:29.632332 2145 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:51:29.633230 kubelet[2145]: E0130 13:51:29.633194 2145 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:51:29.733712 kubelet[2145]: I0130 13:51:29.733672 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:51:29.734210 kubelet[2145]: E0130 13:51:29.734162 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Jan 30 13:51:29.844873 kubelet[2145]: E0130 13:51:29.844679 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms" Jan 30 13:51:29.936476 kubelet[2145]: I0130 13:51:29.936433 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:51:29.936968 kubelet[2145]: E0130 13:51:29.936912 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Jan 30 13:51:29.967982 systemd[1]: Created slice kubepods-burstable-podaa277dd7fc7e4e24734af8edd174821e.slice - libcontainer container kubepods-burstable-podaa277dd7fc7e4e24734af8edd174821e.slice. Jan 30 13:51:29.986957 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 30 13:51:30.008327 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
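The registration failure just above ("Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\" ...") is the kubelet POSTing its own Node object and being refused for the same reason. A rough client-go equivalent of that single call, again assuming an admin kubeconfig only for illustration:

    package main

    import (
        "context"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: an admin kubeconfig on this host; the kubelet uses its bootstrap/rotated credentials instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // The failing call is literally a POST to /api/v1/nodes; in client-go terms:
        node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{
            Name:   "localhost",
            Labels: map[string]string{"kubernetes.io/hostname": "localhost"},
        }}
        if _, err := cs.CoreV1().Nodes().Create(context.Background(), node, metav1.CreateOptions{}); err != nil {
            log.Fatal(err) // "connection refused" until the kube-apiserver static pod is up
        }
    }

The kubelet keeps retrying this registration (and the lease creation above, at growing intervals) until the control-plane static pods it is about to start come up.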
Jan 30 13:51:30.046488 kubelet[2145]: I0130 13:51:30.046408 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:30.046488 kubelet[2145]: I0130 13:51:30.046470 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa277dd7fc7e4e24734af8edd174821e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa277dd7fc7e4e24734af8edd174821e\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:30.046488 kubelet[2145]: I0130 13:51:30.046491 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa277dd7fc7e4e24734af8edd174821e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa277dd7fc7e4e24734af8edd174821e\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:30.046488 kubelet[2145]: I0130 13:51:30.046506 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:30.046712 kubelet[2145]: I0130 13:51:30.046557 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:30.046712 kubelet[2145]: I0130 13:51:30.046589 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa277dd7fc7e4e24734af8edd174821e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa277dd7fc7e4e24734af8edd174821e\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:30.046712 kubelet[2145]: I0130 13:51:30.046616 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:30.046712 kubelet[2145]: I0130 13:51:30.046634 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:30.046712 kubelet[2145]: I0130 13:51:30.046656 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " 
pod="kube-system/kube-scheduler-localhost" Jan 30 13:51:30.285054 kubelet[2145]: E0130 13:51:30.284899 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:30.285942 containerd[1453]: time="2025-01-30T13:51:30.285894832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa277dd7fc7e4e24734af8edd174821e,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:30.306333 kubelet[2145]: E0130 13:51:30.306270 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:30.306921 containerd[1453]: time="2025-01-30T13:51:30.306877482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:30.311211 kubelet[2145]: E0130 13:51:30.311172 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:30.311682 containerd[1453]: time="2025-01-30T13:51:30.311635795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:30.338205 kubelet[2145]: I0130 13:51:30.338162 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:51:30.338643 kubelet[2145]: E0130 13:51:30.338605 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Jan 30 13:51:30.489336 kubelet[2145]: W0130 13:51:30.489224 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Jan 30 13:51:30.489336 kubelet[2145]: E0130 13:51:30.489340 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:30.537301 kubelet[2145]: W0130 13:51:30.537167 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Jan 30 13:51:30.537301 kubelet[2145]: E0130 13:51:30.537219 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:30.579662 kubelet[2145]: W0130 13:51:30.579579 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.119:6443: connect: connection refused Jan 30 13:51:30.579662 kubelet[2145]: E0130 13:51:30.579667 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:30.645968 kubelet[2145]: E0130 13:51:30.645916 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="1.6s" Jan 30 13:51:30.749514 kubelet[2145]: W0130 13:51:30.749444 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Jan 30 13:51:30.749514 kubelet[2145]: E0130 13:51:30.749498 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:30.935281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259042729.mount: Deactivated successfully. Jan 30 13:51:30.944970 containerd[1453]: time="2025-01-30T13:51:30.944906344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:30.946285 containerd[1453]: time="2025-01-30T13:51:30.946187236Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:30.947240 containerd[1453]: time="2025-01-30T13:51:30.947182202Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:30.948334 containerd[1453]: time="2025-01-30T13:51:30.948259231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:51:30.949748 containerd[1453]: time="2025-01-30T13:51:30.949702969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:51:30.951126 containerd[1453]: time="2025-01-30T13:51:30.951096592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:51:30.952391 containerd[1453]: time="2025-01-30T13:51:30.952353750Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:30.956648 containerd[1453]: time="2025-01-30T13:51:30.956590606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jan 30 13:51:30.957395 containerd[1453]: time="2025-01-30T13:51:30.957351823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 645.630107ms" Jan 30 13:51:30.961930 containerd[1453]: time="2025-01-30T13:51:30.961854457Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 675.869156ms" Jan 30 13:51:30.963424 containerd[1453]: time="2025-01-30T13:51:30.963366473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.399814ms" Jan 30 13:51:31.105980 containerd[1453]: time="2025-01-30T13:51:31.105803974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:31.105980 containerd[1453]: time="2025-01-30T13:51:31.105875969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:31.105980 containerd[1453]: time="2025-01-30T13:51:31.105894884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:31.105980 containerd[1453]: time="2025-01-30T13:51:31.105632563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:31.105980 containerd[1453]: time="2025-01-30T13:51:31.105884104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:31.105980 containerd[1453]: time="2025-01-30T13:51:31.105946672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:31.106516 containerd[1453]: time="2025-01-30T13:51:31.105996956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:31.106558 containerd[1453]: time="2025-01-30T13:51:31.106467478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:31.106599 containerd[1453]: time="2025-01-30T13:51:31.106536598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:31.106599 containerd[1453]: time="2025-01-30T13:51:31.106560012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:31.106713 containerd[1453]: time="2025-01-30T13:51:31.106671321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:31.109593 containerd[1453]: time="2025-01-30T13:51:31.107950419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:31.131811 systemd[1]: Started cri-containerd-3958f6a42eb54bde82053fa65a26ab116a6ac54a68ec28655ab371fb97adecde.scope - libcontainer container 3958f6a42eb54bde82053fa65a26ab116a6ac54a68ec28655ab371fb97adecde. Jan 30 13:51:31.137676 systemd[1]: Started cri-containerd-44d4298cfbfa7533e71514b7e25999c2240ba6483c25f7cfdc516aa0db79a293.scope - libcontainer container 44d4298cfbfa7533e71514b7e25999c2240ba6483c25f7cfdc516aa0db79a293. Jan 30 13:51:31.141852 kubelet[2145]: I0130 13:51:31.141167 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:51:31.141852 kubelet[2145]: E0130 13:51:31.141565 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Jan 30 13:51:31.142119 systemd[1]: Started cri-containerd-e3d2ca806a0209c61b6720f9e464532cb7637296087c27c0b0b3822ffee43959.scope - libcontainer container e3d2ca806a0209c61b6720f9e464532cb7637296087c27c0b0b3822ffee43959. Jan 30 13:51:31.180444 containerd[1453]: time="2025-01-30T13:51:31.180295192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"3958f6a42eb54bde82053fa65a26ab116a6ac54a68ec28655ab371fb97adecde\"" Jan 30 13:51:31.182635 kubelet[2145]: E0130 13:51:31.182594 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:31.188902 containerd[1453]: time="2025-01-30T13:51:31.187884424Z" level=info msg="CreateContainer within sandbox \"3958f6a42eb54bde82053fa65a26ab116a6ac54a68ec28655ab371fb97adecde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:51:31.195883 containerd[1453]: time="2025-01-30T13:51:31.195823372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa277dd7fc7e4e24734af8edd174821e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3d2ca806a0209c61b6720f9e464532cb7637296087c27c0b0b3822ffee43959\"" Jan 30 13:51:31.196643 kubelet[2145]: E0130 13:51:31.196614 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:31.197633 containerd[1453]: time="2025-01-30T13:51:31.197602770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"44d4298cfbfa7533e71514b7e25999c2240ba6483c25f7cfdc516aa0db79a293\"" Jan 30 13:51:31.198389 kubelet[2145]: E0130 13:51:31.198313 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:31.199195 containerd[1453]: time="2025-01-30T13:51:31.198979772Z" level=info msg="CreateContainer within sandbox \"e3d2ca806a0209c61b6720f9e464532cb7637296087c27c0b0b3822ffee43959\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:51:31.200924 
containerd[1453]: time="2025-01-30T13:51:31.200903870Z" level=info msg="CreateContainer within sandbox \"44d4298cfbfa7533e71514b7e25999c2240ba6483c25f7cfdc516aa0db79a293\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:51:31.217187 containerd[1453]: time="2025-01-30T13:51:31.217121103Z" level=info msg="CreateContainer within sandbox \"3958f6a42eb54bde82053fa65a26ab116a6ac54a68ec28655ab371fb97adecde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7463fa1a9cf8e213d16c2c3235f15372e3f0072a977d4586e21e793d2138a339\"" Jan 30 13:51:31.217864 containerd[1453]: time="2025-01-30T13:51:31.217835152Z" level=info msg="StartContainer for \"7463fa1a9cf8e213d16c2c3235f15372e3f0072a977d4586e21e793d2138a339\"" Jan 30 13:51:31.234978 containerd[1453]: time="2025-01-30T13:51:31.234904634Z" level=info msg="CreateContainer within sandbox \"e3d2ca806a0209c61b6720f9e464532cb7637296087c27c0b0b3822ffee43959\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"79790409afd0fcc49cfe8464b8c56035ba4331f22096017c75da77dfb09fa4e0\"" Jan 30 13:51:31.235706 containerd[1453]: time="2025-01-30T13:51:31.235607792Z" level=info msg="StartContainer for \"79790409afd0fcc49cfe8464b8c56035ba4331f22096017c75da77dfb09fa4e0\"" Jan 30 13:51:31.238271 containerd[1453]: time="2025-01-30T13:51:31.238212748Z" level=info msg="CreateContainer within sandbox \"44d4298cfbfa7533e71514b7e25999c2240ba6483c25f7cfdc516aa0db79a293\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c4973078c38e6786be24cb9fbcd194a6c7fe29655462db2cb661c7a393c20d8e\"" Jan 30 13:51:31.240356 containerd[1453]: time="2025-01-30T13:51:31.239269610Z" level=info msg="StartContainer for \"c4973078c38e6786be24cb9fbcd194a6c7fe29655462db2cb661c7a393c20d8e\"" Jan 30 13:51:31.247996 systemd[1]: Started cri-containerd-7463fa1a9cf8e213d16c2c3235f15372e3f0072a977d4586e21e793d2138a339.scope - libcontainer container 7463fa1a9cf8e213d16c2c3235f15372e3f0072a977d4586e21e793d2138a339. Jan 30 13:51:31.267032 systemd[1]: Started cri-containerd-79790409afd0fcc49cfe8464b8c56035ba4331f22096017c75da77dfb09fa4e0.scope - libcontainer container 79790409afd0fcc49cfe8464b8c56035ba4331f22096017c75da77dfb09fa4e0. Jan 30 13:51:31.270647 systemd[1]: Started cri-containerd-c4973078c38e6786be24cb9fbcd194a6c7fe29655462db2cb661c7a393c20d8e.scope - libcontainer container c4973078c38e6786be24cb9fbcd194a6c7fe29655462db2cb661c7a393c20d8e. 
Jan 30 13:51:31.296157 kubelet[2145]: E0130 13:51:31.296120 2145 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:31.434449 containerd[1453]: time="2025-01-30T13:51:31.434370183Z" level=info msg="StartContainer for \"7463fa1a9cf8e213d16c2c3235f15372e3f0072a977d4586e21e793d2138a339\" returns successfully" Jan 30 13:51:31.434936 containerd[1453]: time="2025-01-30T13:51:31.434566371Z" level=info msg="StartContainer for \"79790409afd0fcc49cfe8464b8c56035ba4331f22096017c75da77dfb09fa4e0\" returns successfully" Jan 30 13:51:31.434936 containerd[1453]: time="2025-01-30T13:51:31.434601607Z" level=info msg="StartContainer for \"c4973078c38e6786be24cb9fbcd194a6c7fe29655462db2cb661c7a393c20d8e\" returns successfully" Jan 30 13:51:32.279641 kubelet[2145]: E0130 13:51:32.279595 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:32.280791 kubelet[2145]: E0130 13:51:32.280644 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:32.282136 kubelet[2145]: E0130 13:51:32.282121 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:32.456044 kubelet[2145]: E0130 13:51:32.455992 2145 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:51:32.669239 kubelet[2145]: E0130 13:51:32.669022 2145 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f7cb2f465875b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:51:29.234618203 +0000 UTC m=+0.197135350,LastTimestamp:2025-01-30 13:51:29.234618203 +0000 UTC m=+0.197135350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:51:32.722492 kubelet[2145]: E0130 13:51:32.722370 2145 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f7cb2f49c4cce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:51:29.238207694 +0000 UTC m=+0.200724841,LastTimestamp:2025-01-30 13:51:29.238207694 +0000 UTC m=+0.200724841,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:51:32.743054 kubelet[2145]: I0130 13:51:32.742976 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:51:32.750617 kubelet[2145]: I0130 13:51:32.750579 2145 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 13:51:32.750617 kubelet[2145]: E0130 13:51:32.750612 2145 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 30 13:51:32.759133 kubelet[2145]: E0130 13:51:32.759094 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:32.859265 kubelet[2145]: E0130 13:51:32.859219 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:32.960062 kubelet[2145]: E0130 13:51:32.959800 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:33.233525 kubelet[2145]: I0130 13:51:33.233377 2145 apiserver.go:52] "Watching apiserver" Jan 30 13:51:33.243078 kubelet[2145]: I0130 13:51:33.243026 2145 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:51:33.288574 kubelet[2145]: E0130 13:51:33.288532 2145 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:51:33.288735 kubelet[2145]: E0130 13:51:33.288638 2145 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:33.288735 kubelet[2145]: E0130 13:51:33.288638 2145 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:33.288735 kubelet[2145]: E0130 13:51:33.288716 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:33.288839 kubelet[2145]: E0130 13:51:33.288829 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:33.288894 kubelet[2145]: E0130 13:51:33.288830 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:34.379103 kubelet[2145]: E0130 13:51:34.379069 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:34.458437 kubelet[2145]: E0130 13:51:34.458381 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:34.966422 systemd[1]: Reloading requested from client PID 2426 ('systemctl') (unit session-9.scope)... Jan 30 13:51:34.966439 systemd[1]: Reloading... 
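The "Failed creating a mirror pod for ... no PriorityClass with name system-node-critical was found" errors above are transient: the built-in PriorityClasses are seeded by the API server shortly after it starts, and the mirror pods are accepted on a later retry. Purely for illustration, creating that class by hand with client-go would look roughly like this (2000001000 is the value the built-in class carries; the kubeconfig path is again an assumption):

    package main

    import (
        "context"
        "log"

        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        pc := &schedulingv1.PriorityClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
            Value:       2000001000,
            Description: "Used for system critical pods that must not be moved from their current node.",
        }
        if _, err := cs.SchedulingV1().PriorityClasses().Create(context.Background(), pc, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }
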
Jan 30 13:51:35.045815 zram_generator::config[2468]: No configuration found. Jan 30 13:51:35.157504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:35.250306 systemd[1]: Reloading finished in 283 ms. Jan 30 13:51:35.286505 kubelet[2145]: E0130 13:51:35.286456 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:35.286683 kubelet[2145]: E0130 13:51:35.286650 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:35.308330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:35.334633 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:51:35.335092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:35.350102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:35.508656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:35.514694 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:51:35.557672 kubelet[2510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:35.557672 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:51:35.557672 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:35.558128 kubelet[2510]: I0130 13:51:35.557723 2510 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:51:35.564849 kubelet[2510]: I0130 13:51:35.564802 2510 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:51:35.564849 kubelet[2510]: I0130 13:51:35.564832 2510 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:51:35.565078 kubelet[2510]: I0130 13:51:35.565052 2510 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:51:35.566302 kubelet[2510]: I0130 13:51:35.566278 2510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
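"Client rotation is on" together with "Loading cert/key pair from \"/var/lib/kubelet/pki/kubelet-client-current.pem\"" means the restarted kubelet already holds a rotated client certificate: that single PEM file contains both the certificate and its private key, so it no longer needs to bootstrap via a CSR. A small Go sketch of using such a combined PEM against the API server; the /healthz probe is just an illustrative request:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // kubelet-client-current.pem holds both the client certificate and its key,
        // so the same path is passed for both arguments.
        pair, err := tls.LoadX509KeyPair(
            "/var/lib/kubelet/pki/kubelet-client-current.pem",
            "/var/lib/kubelet/pki/kubelet-client-current.pem",
        )
        if err != nil {
            log.Fatal(err)
        }

        // Trust the cluster CA the kubelet is configured with (the client-ca-bundle path above).
        caPEM, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{pair}, RootCAs: pool},
        }}
        resp, err := client.Get("https://10.0.0.119:6443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println(resp.Status)
    }
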
Jan 30 13:51:35.568032 kubelet[2510]: I0130 13:51:35.567995 2510 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:51:35.571070 kubelet[2510]: E0130 13:51:35.571029 2510 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:51:35.571070 kubelet[2510]: I0130 13:51:35.571062 2510 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:51:35.578356 kubelet[2510]: I0130 13:51:35.576957 2510 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:51:35.578356 kubelet[2510]: I0130 13:51:35.577177 2510 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:51:35.578356 kubelet[2510]: I0130 13:51:35.577365 2510 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:51:35.578356 kubelet[2510]: I0130 13:51:35.577398 2510 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:51:35.578640 kubelet[2510]: I0130 13:51:35.577801 2510 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:51:35.578640 kubelet[2510]: I0130 13:51:35.577815 2510 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:51:35.578640 kubelet[2510]: I0130 13:51:35.577873 2510 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:35.578640 kubelet[2510]: I0130 13:51:35.578032 2510 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:51:35.578640 kubelet[2510]: I0130 13:51:35.578048 2510 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:51:35.578640 kubelet[2510]: I0130 13:51:35.578087 
2510 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:51:35.578640 kubelet[2510]: I0130 13:51:35.578106 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:51:35.580303 kubelet[2510]: I0130 13:51:35.580279 2510 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:51:35.581026 kubelet[2510]: I0130 13:51:35.580945 2510 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:51:35.583724 kubelet[2510]: I0130 13:51:35.581707 2510 server.go:1269] "Started kubelet" Jan 30 13:51:35.583724 kubelet[2510]: I0130 13:51:35.583171 2510 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:51:35.583724 kubelet[2510]: I0130 13:51:35.583439 2510 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:51:35.583724 kubelet[2510]: I0130 13:51:35.583575 2510 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:51:35.586202 kubelet[2510]: I0130 13:51:35.584163 2510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:51:35.586202 kubelet[2510]: I0130 13:51:35.584290 2510 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:51:35.586202 kubelet[2510]: E0130 13:51:35.584737 2510 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:35.586202 kubelet[2510]: I0130 13:51:35.584800 2510 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:51:35.586202 kubelet[2510]: I0130 13:51:35.584949 2510 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:51:35.586202 kubelet[2510]: I0130 13:51:35.585097 2510 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:51:35.588552 kubelet[2510]: I0130 13:51:35.588525 2510 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:51:35.590325 kubelet[2510]: I0130 13:51:35.590303 2510 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:51:35.592835 kubelet[2510]: I0130 13:51:35.590503 2510 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:51:35.592835 kubelet[2510]: I0130 13:51:35.592506 2510 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:51:35.598130 kubelet[2510]: E0130 13:51:35.598097 2510 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:51:35.608199 kubelet[2510]: I0130 13:51:35.608139 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:51:35.609956 kubelet[2510]: I0130 13:51:35.609925 2510 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:51:35.610035 kubelet[2510]: I0130 13:51:35.609976 2510 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:51:35.610035 kubelet[2510]: I0130 13:51:35.610001 2510 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:51:35.610104 kubelet[2510]: E0130 13:51:35.610054 2510 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:51:35.630019 kubelet[2510]: I0130 13:51:35.629981 2510 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:51:35.630019 kubelet[2510]: I0130 13:51:35.630004 2510 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:51:35.630019 kubelet[2510]: I0130 13:51:35.630022 2510 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:35.630241 kubelet[2510]: I0130 13:51:35.630161 2510 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:51:35.630241 kubelet[2510]: I0130 13:51:35.630172 2510 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:51:35.630241 kubelet[2510]: I0130 13:51:35.630190 2510 policy_none.go:49] "None policy: Start" Jan 30 13:51:35.630871 kubelet[2510]: I0130 13:51:35.630839 2510 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:51:35.630871 kubelet[2510]: I0130 13:51:35.630860 2510 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:51:35.631015 kubelet[2510]: I0130 13:51:35.630985 2510 state_mem.go:75] "Updated machine memory state" Jan 30 13:51:35.636034 kubelet[2510]: I0130 13:51:35.636001 2510 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:51:35.636364 kubelet[2510]: I0130 13:51:35.636197 2510 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:51:35.636364 kubelet[2510]: I0130 13:51:35.636213 2510 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:51:35.636440 kubelet[2510]: I0130 13:51:35.636371 2510 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:51:35.718244 kubelet[2510]: E0130 13:51:35.718105 2510 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:35.718485 kubelet[2510]: E0130 13:51:35.718456 2510 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:51:35.741135 kubelet[2510]: I0130 13:51:35.741098 2510 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:51:35.747474 kubelet[2510]: I0130 13:51:35.747450 2510 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 30 13:51:35.747603 kubelet[2510]: I0130 13:51:35.747529 2510 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 13:51:35.786229 kubelet[2510]: I0130 13:51:35.786088 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:35.786229 kubelet[2510]: I0130 13:51:35.786120 2510 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:51:35.786229 kubelet[2510]: I0130 13:51:35.786135 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa277dd7fc7e4e24734af8edd174821e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa277dd7fc7e4e24734af8edd174821e\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:35.786229 kubelet[2510]: I0130 13:51:35.786150 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:35.786229 kubelet[2510]: I0130 13:51:35.786167 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:35.786472 kubelet[2510]: I0130 13:51:35.786179 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:35.786472 kubelet[2510]: I0130 13:51:35.786193 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:35.786472 kubelet[2510]: I0130 13:51:35.786237 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa277dd7fc7e4e24734af8edd174821e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa277dd7fc7e4e24734af8edd174821e\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:35.786472 kubelet[2510]: I0130 13:51:35.786271 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa277dd7fc7e4e24734af8edd174821e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa277dd7fc7e4e24734af8edd174821e\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:36.019310 kubelet[2510]: E0130 13:51:36.019258 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:36.019459 kubelet[2510]: E0130 13:51:36.019258 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:36.019459 kubelet[2510]: E0130 13:51:36.019273 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:36.579208 kubelet[2510]: I0130 13:51:36.579177 2510 apiserver.go:52] "Watching apiserver" Jan 30 13:51:36.585303 kubelet[2510]: I0130 13:51:36.585259 2510 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:51:36.620396 kubelet[2510]: E0130 13:51:36.620028 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:36.620396 kubelet[2510]: E0130 13:51:36.620093 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:37.059526 kubelet[2510]: E0130 13:51:37.059355 2510 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:37.059650 kubelet[2510]: E0130 13:51:37.059562 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:37.337763 kubelet[2510]: I0130 13:51:37.337456 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.337435188 podStartE2EDuration="3.337435188s" podCreationTimestamp="2025-01-30 13:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:37.048465594 +0000 UTC m=+1.529522819" watchObservedRunningTime="2025-01-30 13:51:37.337435188 +0000 UTC m=+1.818492403" Jan 30 13:51:37.337959 kubelet[2510]: I0130 13:51:37.337839 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.337831968 podStartE2EDuration="2.337831968s" podCreationTimestamp="2025-01-30 13:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:37.337409288 +0000 UTC m=+1.818466523" watchObservedRunningTime="2025-01-30 13:51:37.337831968 +0000 UTC m=+1.818889213" Jan 30 13:51:37.585174 kubelet[2510]: I0130 13:51:37.585100 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.585080562 podStartE2EDuration="3.585080562s" podCreationTimestamp="2025-01-30 13:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:37.585059021 +0000 UTC m=+2.066116236" watchObservedRunningTime="2025-01-30 13:51:37.585080562 +0000 UTC m=+2.066137777" Jan 30 13:51:37.621701 kubelet[2510]: E0130 13:51:37.621586 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:37.621701 kubelet[2510]: E0130 13:51:37.621681 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:38.819975 kubelet[2510]: E0130 13:51:38.819901 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:41.799056 kubelet[2510]: I0130 13:51:41.799030 2510 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:51:41.799888 containerd[1453]: time="2025-01-30T13:51:41.799812883Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:51:41.800312 kubelet[2510]: I0130 13:51:41.799952 2510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:51:41.827109 sudo[1653]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:41.829065 sshd[1650]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:41.833192 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:42054.service: Deactivated successfully. Jan 30 13:51:41.835217 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:51:41.835433 systemd[1]: session-9.scope: Consumed 4.123s CPU time, 157.0M memory peak, 0B memory swap peak. Jan 30 13:51:41.836275 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:51:41.837554 systemd-logind[1441]: Removed session 9. Jan 30 13:51:41.882884 kubelet[2510]: E0130 13:51:41.882849 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:42.629597 kubelet[2510]: E0130 13:51:42.629548 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:43.023036 systemd[1]: Created slice kubepods-besteffort-pod1152270e_f58e_4d80_880a_1878d307c0d9.slice - libcontainer container kubepods-besteffort-pod1152270e_f58e_4d80_880a_1878d307c0d9.slice. 
Jan 30 13:51:43.035851 kubelet[2510]: I0130 13:51:43.033990 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1152270e-f58e-4d80-880a-1878d307c0d9-kube-proxy\") pod \"kube-proxy-rxwvn\" (UID: \"1152270e-f58e-4d80-880a-1878d307c0d9\") " pod="kube-system/kube-proxy-rxwvn" Jan 30 13:51:43.035851 kubelet[2510]: I0130 13:51:43.034068 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1152270e-f58e-4d80-880a-1878d307c0d9-xtables-lock\") pod \"kube-proxy-rxwvn\" (UID: \"1152270e-f58e-4d80-880a-1878d307c0d9\") " pod="kube-system/kube-proxy-rxwvn" Jan 30 13:51:43.035851 kubelet[2510]: I0130 13:51:43.034958 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1152270e-f58e-4d80-880a-1878d307c0d9-lib-modules\") pod \"kube-proxy-rxwvn\" (UID: \"1152270e-f58e-4d80-880a-1878d307c0d9\") " pod="kube-system/kube-proxy-rxwvn" Jan 30 13:51:43.035851 kubelet[2510]: I0130 13:51:43.035051 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqgg\" (UniqueName: \"kubernetes.io/projected/1152270e-f58e-4d80-880a-1878d307c0d9-kube-api-access-6xqgg\") pod \"kube-proxy-rxwvn\" (UID: \"1152270e-f58e-4d80-880a-1878d307c0d9\") " pod="kube-system/kube-proxy-rxwvn" Jan 30 13:51:43.052693 systemd[1]: Created slice kubepods-besteffort-pod3c267fc2_757c_4460_a103_d1f750748768.slice - libcontainer container kubepods-besteffort-pod3c267fc2_757c_4460_a103_d1f750748768.slice. Jan 30 13:51:43.135640 kubelet[2510]: I0130 13:51:43.135585 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z4pk\" (UniqueName: \"kubernetes.io/projected/3c267fc2-757c-4460-a103-d1f750748768-kube-api-access-6z4pk\") pod \"tigera-operator-76c4976dd7-mrks8\" (UID: \"3c267fc2-757c-4460-a103-d1f750748768\") " pod="tigera-operator/tigera-operator-76c4976dd7-mrks8" Jan 30 13:51:43.135640 kubelet[2510]: I0130 13:51:43.135651 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c267fc2-757c-4460-a103-d1f750748768-var-lib-calico\") pod \"tigera-operator-76c4976dd7-mrks8\" (UID: \"3c267fc2-757c-4460-a103-d1f750748768\") " pod="tigera-operator/tigera-operator-76c4976dd7-mrks8" Jan 30 13:51:43.333034 kubelet[2510]: E0130 13:51:43.333001 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:43.333846 containerd[1453]: time="2025-01-30T13:51:43.333529541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxwvn,Uid:1152270e-f58e-4d80-880a-1878d307c0d9,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:43.355626 containerd[1453]: time="2025-01-30T13:51:43.355591122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-mrks8,Uid:3c267fc2-757c-4460-a103-d1f750748768,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:51:43.464746 containerd[1453]: time="2025-01-30T13:51:43.464631548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:43.464746 containerd[1453]: time="2025-01-30T13:51:43.464702953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:43.465535 containerd[1453]: time="2025-01-30T13:51:43.464723031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:43.465974 containerd[1453]: time="2025-01-30T13:51:43.465674873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:43.468644 containerd[1453]: time="2025-01-30T13:51:43.467923834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:43.468644 containerd[1453]: time="2025-01-30T13:51:43.468055975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:43.468644 containerd[1453]: time="2025-01-30T13:51:43.468071695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:43.468644 containerd[1453]: time="2025-01-30T13:51:43.468251267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:43.487936 systemd[1]: Started cri-containerd-c22967d99ec5bc448978096f64bc9f2325a9bc117a60c062905199aec95f288e.scope - libcontainer container c22967d99ec5bc448978096f64bc9f2325a9bc117a60c062905199aec95f288e. Jan 30 13:51:43.491045 systemd[1]: Started cri-containerd-9750c50c0e717e76b0f8dba703082813f0e2b5c68c85f05e7f9c489dd5fd362b.scope - libcontainer container 9750c50c0e717e76b0f8dba703082813f0e2b5c68c85f05e7f9c489dd5fd362b. 
Jan 30 13:51:43.517255 containerd[1453]: time="2025-01-30T13:51:43.517195752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxwvn,Uid:1152270e-f58e-4d80-880a-1878d307c0d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c22967d99ec5bc448978096f64bc9f2325a9bc117a60c062905199aec95f288e\"" Jan 30 13:51:43.518226 kubelet[2510]: E0130 13:51:43.518201 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:43.520933 containerd[1453]: time="2025-01-30T13:51:43.520877720Z" level=info msg="CreateContainer within sandbox \"c22967d99ec5bc448978096f64bc9f2325a9bc117a60c062905199aec95f288e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:51:43.532609 containerd[1453]: time="2025-01-30T13:51:43.532544346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-mrks8,Uid:3c267fc2-757c-4460-a103-d1f750748768,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9750c50c0e717e76b0f8dba703082813f0e2b5c68c85f05e7f9c489dd5fd362b\"" Jan 30 13:51:43.533970 containerd[1453]: time="2025-01-30T13:51:43.533944831Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:51:43.870307 containerd[1453]: time="2025-01-30T13:51:43.870249328Z" level=info msg="CreateContainer within sandbox \"c22967d99ec5bc448978096f64bc9f2325a9bc117a60c062905199aec95f288e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e91802783d35432de31302c2320c13eff4dfaa8acb447cb4436d12d74934f21b\"" Jan 30 13:51:43.870861 containerd[1453]: time="2025-01-30T13:51:43.870828460Z" level=info msg="StartContainer for \"e91802783d35432de31302c2320c13eff4dfaa8acb447cb4436d12d74934f21b\"" Jan 30 13:51:43.896898 systemd[1]: Started cri-containerd-e91802783d35432de31302c2320c13eff4dfaa8acb447cb4436d12d74934f21b.scope - libcontainer container e91802783d35432de31302c2320c13eff4dfaa8acb447cb4436d12d74934f21b. Jan 30 13:51:43.924592 containerd[1453]: time="2025-01-30T13:51:43.924547905Z" level=info msg="StartContainer for \"e91802783d35432de31302c2320c13eff4dfaa8acb447cb4436d12d74934f21b\" returns successfully" Jan 30 13:51:44.634673 kubelet[2510]: E0130 13:51:44.634643 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:44.643526 kubelet[2510]: I0130 13:51:44.643454 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rxwvn" podStartSLOduration=2.643437683 podStartE2EDuration="2.643437683s" podCreationTimestamp="2025-01-30 13:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:44.643302797 +0000 UTC m=+9.124360012" watchObservedRunningTime="2025-01-30 13:51:44.643437683 +0000 UTC m=+9.124494898" Jan 30 13:51:45.284088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823214622.mount: Deactivated successfully. 
Jan 30 13:51:45.580621 containerd[1453]: time="2025-01-30T13:51:45.580565840Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:45.581518 containerd[1453]: time="2025-01-30T13:51:45.581447536Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:51:45.582722 containerd[1453]: time="2025-01-30T13:51:45.582692310Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:45.585218 containerd[1453]: time="2025-01-30T13:51:45.585183013Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:45.585832 containerd[1453]: time="2025-01-30T13:51:45.585788893Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.051809757s" Jan 30 13:51:45.585832 containerd[1453]: time="2025-01-30T13:51:45.585822077Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:51:45.587441 containerd[1453]: time="2025-01-30T13:51:45.587409663Z" level=info msg="CreateContainer within sandbox \"9750c50c0e717e76b0f8dba703082813f0e2b5c68c85f05e7f9c489dd5fd362b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:51:45.603517 containerd[1453]: time="2025-01-30T13:51:45.603464556Z" level=info msg="CreateContainer within sandbox \"9750c50c0e717e76b0f8dba703082813f0e2b5c68c85f05e7f9c489dd5fd362b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d539fc011ed1c936559742ddb52f56b77cefba63b99fc90b2280fbf89357814c\"" Jan 30 13:51:45.604079 containerd[1453]: time="2025-01-30T13:51:45.604046983Z" level=info msg="StartContainer for \"d539fc011ed1c936559742ddb52f56b77cefba63b99fc90b2280fbf89357814c\"" Jan 30 13:51:45.631043 systemd[1]: Started cri-containerd-d539fc011ed1c936559742ddb52f56b77cefba63b99fc90b2280fbf89357814c.scope - libcontainer container d539fc011ed1c936559742ddb52f56b77cefba63b99fc90b2280fbf89357814c. Jan 30 13:51:45.639304 kubelet[2510]: E0130 13:51:45.639275 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:45.666022 containerd[1453]: time="2025-01-30T13:51:45.665808083Z" level=info msg="StartContainer for \"d539fc011ed1c936559742ddb52f56b77cefba63b99fc90b2280fbf89357814c\" returns successfully" Jan 30 13:51:46.390176 update_engine[1442]: I20250130 13:51:46.390051 1442 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:51:46.414576 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2905) Jan 30 13:51:46.449797 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2724) Jan 30 13:51:46.483851 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2724) Jan 30 13:51:46.650209 kubelet[2510]: I0130 13:51:46.650047 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-mrks8" podStartSLOduration=1.5970678550000001 podStartE2EDuration="3.650030815s" podCreationTimestamp="2025-01-30 13:51:43 +0000 UTC" firstStartedPulling="2025-01-30 13:51:43.533483353 +0000 UTC m=+8.014540568" lastFinishedPulling="2025-01-30 13:51:45.586446313 +0000 UTC m=+10.067503528" observedRunningTime="2025-01-30 13:51:46.649721167 +0000 UTC m=+11.130778382" watchObservedRunningTime="2025-01-30 13:51:46.650030815 +0000 UTC m=+11.131088030" Jan 30 13:51:46.749345 kubelet[2510]: E0130 13:51:46.749314 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:48.734807 systemd[1]: Created slice kubepods-besteffort-pod72773b17_4413_492f_abe1_08cb3c318328.slice - libcontainer container kubepods-besteffort-pod72773b17_4413_492f_abe1_08cb3c318328.slice. Jan 30 13:51:48.770108 kubelet[2510]: I0130 13:51:48.770051 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72773b17-4413-492f-abe1-08cb3c318328-tigera-ca-bundle\") pod \"calico-typha-7f649885d-bzwcq\" (UID: \"72773b17-4413-492f-abe1-08cb3c318328\") " pod="calico-system/calico-typha-7f649885d-bzwcq" Jan 30 13:51:48.770873 kubelet[2510]: I0130 13:51:48.770758 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/72773b17-4413-492f-abe1-08cb3c318328-typha-certs\") pod \"calico-typha-7f649885d-bzwcq\" (UID: \"72773b17-4413-492f-abe1-08cb3c318328\") " pod="calico-system/calico-typha-7f649885d-bzwcq" Jan 30 13:51:48.770932 kubelet[2510]: I0130 13:51:48.770909 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s576\" (UniqueName: \"kubernetes.io/projected/72773b17-4413-492f-abe1-08cb3c318328-kube-api-access-4s576\") pod \"calico-typha-7f649885d-bzwcq\" (UID: \"72773b17-4413-492f-abe1-08cb3c318328\") " pod="calico-system/calico-typha-7f649885d-bzwcq" Jan 30 13:51:48.788011 systemd[1]: Created slice kubepods-besteffort-pod0d102ef4_ed98_4a66_ab86_90fef8a5a37e.slice - libcontainer container kubepods-besteffort-pod0d102ef4_ed98_4a66_ab86_90fef8a5a37e.slice. 
Jan 30 13:51:48.826515 kubelet[2510]: E0130 13:51:48.826107 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:48.871498 kubelet[2510]: I0130 13:51:48.871435 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-cni-bin-dir\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.871498 kubelet[2510]: I0130 13:51:48.871489 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-tigera-ca-bundle\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.871498 kubelet[2510]: I0130 13:51:48.871509 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-node-certs\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.871826 kubelet[2510]: I0130 13:51:48.871535 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7ghm\" (UniqueName: \"kubernetes.io/projected/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-kube-api-access-c7ghm\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.871826 kubelet[2510]: I0130 13:51:48.871557 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-var-run-calico\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.871826 kubelet[2510]: I0130 13:51:48.871571 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-policysync\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.871826 kubelet[2510]: I0130 13:51:48.871595 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-lib-modules\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.871826 kubelet[2510]: I0130 13:51:48.871614 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-var-lib-calico\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.872171 kubelet[2510]: I0130 13:51:48.872141 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-cni-log-dir\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.872171 kubelet[2510]: I0130 13:51:48.872166 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-flexvol-driver-host\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.872171 kubelet[2510]: I0130 13:51:48.872192 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-xtables-lock\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.872457 kubelet[2510]: I0130 13:51:48.872208 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0d102ef4-ed98-4a66-ab86-90fef8a5a37e-cni-net-dir\") pod \"calico-node-r8ll8\" (UID: \"0d102ef4-ed98-4a66-ab86-90fef8a5a37e\") " pod="calico-system/calico-node-r8ll8" Jan 30 13:51:48.889828 kubelet[2510]: E0130 13:51:48.889751 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lzkrl" podUID="671740aa-5720-4131-9f78-6538b2c8e710" Jan 30 13:51:48.973658 kubelet[2510]: I0130 13:51:48.973323 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/671740aa-5720-4131-9f78-6538b2c8e710-varrun\") pod \"csi-node-driver-lzkrl\" (UID: \"671740aa-5720-4131-9f78-6538b2c8e710\") " pod="calico-system/csi-node-driver-lzkrl" Jan 30 13:51:48.973658 kubelet[2510]: I0130 13:51:48.973368 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/671740aa-5720-4131-9f78-6538b2c8e710-registration-dir\") pod \"csi-node-driver-lzkrl\" (UID: \"671740aa-5720-4131-9f78-6538b2c8e710\") " pod="calico-system/csi-node-driver-lzkrl" Jan 30 13:51:48.973658 kubelet[2510]: I0130 13:51:48.973426 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/671740aa-5720-4131-9f78-6538b2c8e710-kubelet-dir\") pod \"csi-node-driver-lzkrl\" (UID: \"671740aa-5720-4131-9f78-6538b2c8e710\") " pod="calico-system/csi-node-driver-lzkrl" Jan 30 13:51:48.973658 kubelet[2510]: I0130 13:51:48.973453 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/671740aa-5720-4131-9f78-6538b2c8e710-socket-dir\") pod \"csi-node-driver-lzkrl\" (UID: \"671740aa-5720-4131-9f78-6538b2c8e710\") " pod="calico-system/csi-node-driver-lzkrl" Jan 30 13:51:48.973658 kubelet[2510]: I0130 13:51:48.973467 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6q2l\" (UniqueName: 
\"kubernetes.io/projected/671740aa-5720-4131-9f78-6538b2c8e710-kube-api-access-d6q2l\") pod \"csi-node-driver-lzkrl\" (UID: \"671740aa-5720-4131-9f78-6538b2c8e710\") " pod="calico-system/csi-node-driver-lzkrl" Jan 30 13:51:48.981787 kubelet[2510]: E0130 13:51:48.979246 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:48.981787 kubelet[2510]: W0130 13:51:48.979273 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:48.981787 kubelet[2510]: E0130 13:51:48.979296 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:48.984343 kubelet[2510]: E0130 13:51:48.984308 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:48.984343 kubelet[2510]: W0130 13:51:48.984333 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:48.984343 kubelet[2510]: E0130 13:51:48.984355 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.041559 kubelet[2510]: E0130 13:51:49.041430 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:49.041945 containerd[1453]: time="2025-01-30T13:51:49.041911236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f649885d-bzwcq,Uid:72773b17-4413-492f-abe1-08cb3c318328,Namespace:calico-system,Attempt:0,}" Jan 30 13:51:49.074205 kubelet[2510]: E0130 13:51:49.074170 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.074205 kubelet[2510]: W0130 13:51:49.074193 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.074319 kubelet[2510]: E0130 13:51:49.074212 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.074584 kubelet[2510]: E0130 13:51:49.074562 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.074584 kubelet[2510]: W0130 13:51:49.074574 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.074630 kubelet[2510]: E0130 13:51:49.074587 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:49.074874 kubelet[2510]: E0130 13:51:49.074853 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.074874 kubelet[2510]: W0130 13:51:49.074865 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.074922 kubelet[2510]: E0130 13:51:49.074878 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.075178 kubelet[2510]: E0130 13:51:49.075156 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.075178 kubelet[2510]: W0130 13:51:49.075168 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.075236 kubelet[2510]: E0130 13:51:49.075181 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.075446 kubelet[2510]: E0130 13:51:49.075431 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.075446 kubelet[2510]: W0130 13:51:49.075442 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.075505 kubelet[2510]: E0130 13:51:49.075456 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.075782 kubelet[2510]: E0130 13:51:49.075735 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.075782 kubelet[2510]: W0130 13:51:49.075763 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.075843 kubelet[2510]: E0130 13:51:49.075806 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.076050 kubelet[2510]: E0130 13:51:49.076031 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.076050 kubelet[2510]: W0130 13:51:49.076045 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.076149 kubelet[2510]: E0130 13:51:49.076098 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:49.076412 kubelet[2510]: E0130 13:51:49.076390 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.076443 kubelet[2510]: W0130 13:51:49.076411 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.076493 kubelet[2510]: E0130 13:51:49.076470 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.076682 kubelet[2510]: E0130 13:51:49.076657 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.076682 kubelet[2510]: W0130 13:51:49.076669 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.076907 kubelet[2510]: E0130 13:51:49.076706 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.076942 kubelet[2510]: E0130 13:51:49.076926 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.076969 kubelet[2510]: W0130 13:51:49.076944 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.077020 kubelet[2510]: E0130 13:51:49.076993 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.077243 kubelet[2510]: E0130 13:51:49.077226 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.077243 kubelet[2510]: W0130 13:51:49.077239 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.077298 kubelet[2510]: E0130 13:51:49.077274 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.077468 kubelet[2510]: E0130 13:51:49.077450 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.077468 kubelet[2510]: W0130 13:51:49.077464 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.077536 kubelet[2510]: E0130 13:51:49.077494 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:49.077754 kubelet[2510]: E0130 13:51:49.077729 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.077754 kubelet[2510]: W0130 13:51:49.077744 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.077830 kubelet[2510]: E0130 13:51:49.077789 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.078036 kubelet[2510]: E0130 13:51:49.078019 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.078036 kubelet[2510]: W0130 13:51:49.078033 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.078162 kubelet[2510]: E0130 13:51:49.078067 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.078291 kubelet[2510]: E0130 13:51:49.078276 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.078291 kubelet[2510]: W0130 13:51:49.078288 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.078371 kubelet[2510]: E0130 13:51:49.078346 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.078552 kubelet[2510]: E0130 13:51:49.078534 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.078598 kubelet[2510]: W0130 13:51:49.078551 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.078598 kubelet[2510]: E0130 13:51:49.078586 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.078833 kubelet[2510]: E0130 13:51:49.078816 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.078833 kubelet[2510]: W0130 13:51:49.078831 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.078900 kubelet[2510]: E0130 13:51:49.078861 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:49.079147 kubelet[2510]: E0130 13:51:49.079127 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.079147 kubelet[2510]: W0130 13:51:49.079142 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.079237 kubelet[2510]: E0130 13:51:49.079184 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.079446 kubelet[2510]: E0130 13:51:49.079421 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.079446 kubelet[2510]: W0130 13:51:49.079432 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.079556 kubelet[2510]: E0130 13:51:49.079521 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.079730 kubelet[2510]: E0130 13:51:49.079703 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.079730 kubelet[2510]: W0130 13:51:49.079716 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.080536 kubelet[2510]: E0130 13:51:49.079777 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.080536 kubelet[2510]: E0130 13:51:49.079963 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.080536 kubelet[2510]: W0130 13:51:49.079970 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.080536 kubelet[2510]: E0130 13:51:49.080008 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.080536 kubelet[2510]: E0130 13:51:49.080198 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.080536 kubelet[2510]: W0130 13:51:49.080206 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.080536 kubelet[2510]: E0130 13:51:49.080307 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:49.080536 kubelet[2510]: E0130 13:51:49.080443 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.080536 kubelet[2510]: W0130 13:51:49.080451 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.080536 kubelet[2510]: E0130 13:51:49.080468 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.080943 kubelet[2510]: E0130 13:51:49.080693 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.080943 kubelet[2510]: W0130 13:51:49.080703 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.080943 kubelet[2510]: E0130 13:51:49.080718 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.081056 kubelet[2510]: E0130 13:51:49.080955 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.081056 kubelet[2510]: W0130 13:51:49.080962 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.081056 kubelet[2510]: E0130 13:51:49.080970 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.091347 kubelet[2510]: E0130 13:51:49.091307 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:49.091347 kubelet[2510]: W0130 13:51:49.091330 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:49.091347 kubelet[2510]: E0130 13:51:49.091354 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:49.091627 kubelet[2510]: E0130 13:51:49.091557 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:49.093107 containerd[1453]: time="2025-01-30T13:51:49.093064317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r8ll8,Uid:0d102ef4-ed98-4a66-ab86-90fef8a5a37e,Namespace:calico-system,Attempt:0,}" Jan 30 13:51:49.106553 containerd[1453]: time="2025-01-30T13:51:49.106300100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:49.106553 containerd[1453]: time="2025-01-30T13:51:49.106360254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:49.106553 containerd[1453]: time="2025-01-30T13:51:49.106373008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:49.106553 containerd[1453]: time="2025-01-30T13:51:49.106466675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:49.131931 systemd[1]: Started cri-containerd-643dc189dccf1c2c360e743450028373e926bbf04ca37b517a6f3d94831186b2.scope - libcontainer container 643dc189dccf1c2c360e743450028373e926bbf04ca37b517a6f3d94831186b2. Jan 30 13:51:49.170565 containerd[1453]: time="2025-01-30T13:51:49.170492401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f649885d-bzwcq,Uid:72773b17-4413-492f-abe1-08cb3c318328,Namespace:calico-system,Attempt:0,} returns sandbox id \"643dc189dccf1c2c360e743450028373e926bbf04ca37b517a6f3d94831186b2\"" Jan 30 13:51:49.171438 kubelet[2510]: E0130 13:51:49.171403 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:49.172512 containerd[1453]: time="2025-01-30T13:51:49.172474807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:51:49.259520 containerd[1453]: time="2025-01-30T13:51:49.258839529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:49.259520 containerd[1453]: time="2025-01-30T13:51:49.259472228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:49.259520 containerd[1453]: time="2025-01-30T13:51:49.259488329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:49.259872 containerd[1453]: time="2025-01-30T13:51:49.259573349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:49.278914 systemd[1]: Started cri-containerd-bf3ef58a566962cb7ac3d718626b831b746c206f168e594c3ad2df776dbebf6a.scope - libcontainer container bf3ef58a566962cb7ac3d718626b831b746c206f168e594c3ad2df776dbebf6a. 
Jan 30 13:51:49.307998 containerd[1453]: time="2025-01-30T13:51:49.307810677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r8ll8,Uid:0d102ef4-ed98-4a66-ab86-90fef8a5a37e,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf3ef58a566962cb7ac3d718626b831b746c206f168e594c3ad2df776dbebf6a\"" Jan 30 13:51:49.309072 kubelet[2510]: E0130 13:51:49.309002 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:50.610794 kubelet[2510]: E0130 13:51:50.610708 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lzkrl" podUID="671740aa-5720-4131-9f78-6538b2c8e710" Jan 30 13:51:51.869143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343435169.mount: Deactivated successfully. Jan 30 13:51:52.522268 containerd[1453]: time="2025-01-30T13:51:52.522207244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:52.523146 containerd[1453]: time="2025-01-30T13:51:52.523106565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:51:52.524184 containerd[1453]: time="2025-01-30T13:51:52.524141511Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:52.526386 containerd[1453]: time="2025-01-30T13:51:52.526352282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:52.526985 containerd[1453]: time="2025-01-30T13:51:52.526936347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.354426334s" Jan 30 13:51:52.526985 containerd[1453]: time="2025-01-30T13:51:52.526981462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:51:52.528078 containerd[1453]: time="2025-01-30T13:51:52.528049902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:51:52.534699 containerd[1453]: time="2025-01-30T13:51:52.534660553Z" level=info msg="CreateContainer within sandbox \"643dc189dccf1c2c360e743450028373e926bbf04ca37b517a6f3d94831186b2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:51:52.552566 containerd[1453]: time="2025-01-30T13:51:52.552510597Z" level=info msg="CreateContainer within sandbox \"643dc189dccf1c2c360e743450028373e926bbf04ca37b517a6f3d94831186b2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f8c234a775822b129cd631c1ad0938a030e1ae28f3d185b34c371ed855fd68cc\"" Jan 30 13:51:52.553129 containerd[1453]: time="2025-01-30T13:51:52.553079372Z" level=info 
msg="StartContainer for \"f8c234a775822b129cd631c1ad0938a030e1ae28f3d185b34c371ed855fd68cc\"" Jan 30 13:51:52.582015 systemd[1]: Started cri-containerd-f8c234a775822b129cd631c1ad0938a030e1ae28f3d185b34c371ed855fd68cc.scope - libcontainer container f8c234a775822b129cd631c1ad0938a030e1ae28f3d185b34c371ed855fd68cc. Jan 30 13:51:52.611066 kubelet[2510]: E0130 13:51:52.611013 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lzkrl" podUID="671740aa-5720-4131-9f78-6538b2c8e710" Jan 30 13:51:52.622463 containerd[1453]: time="2025-01-30T13:51:52.622403494Z" level=info msg="StartContainer for \"f8c234a775822b129cd631c1ad0938a030e1ae28f3d185b34c371ed855fd68cc\" returns successfully" Jan 30 13:51:52.657159 kubelet[2510]: E0130 13:51:52.657124 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:52.668074 kubelet[2510]: I0130 13:51:52.667684 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f649885d-bzwcq" podStartSLOduration=1.3119028529999999 podStartE2EDuration="4.667662268s" podCreationTimestamp="2025-01-30 13:51:48 +0000 UTC" firstStartedPulling="2025-01-30 13:51:49.172090218 +0000 UTC m=+13.653147433" lastFinishedPulling="2025-01-30 13:51:52.527849623 +0000 UTC m=+17.008906848" observedRunningTime="2025-01-30 13:51:52.667566105 +0000 UTC m=+17.148623321" watchObservedRunningTime="2025-01-30 13:51:52.667662268 +0000 UTC m=+17.148719493" Jan 30 13:51:52.685811 kubelet[2510]: E0130 13:51:52.684995 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:52.685811 kubelet[2510]: W0130 13:51:52.685041 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:52.685811 kubelet[2510]: E0130 13:51:52.685063 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:52.685811 kubelet[2510]: E0130 13:51:52.685316 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:52.685811 kubelet[2510]: W0130 13:51:52.685324 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:52.685811 kubelet[2510]: E0130 13:51:52.685332 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:52.685811 kubelet[2510]: E0130 13:51:52.685593 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:52.685811 kubelet[2510]: W0130 13:51:52.685601 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:52.685811 kubelet[2510]: E0130 13:51:52.685609 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the same driver-call.go:262 / driver-call.go:149 / plugins.go:691 FlexVolume error triplet repeats verbatim through Jan 30 13:51:52.708007 …]
Jan 30 13:51:53.658461 kubelet[2510]: I0130 13:51:53.658419 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:51:53.658975 kubelet[2510]: E0130 13:51:53.658738 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[… the same FlexVolume error triplet resumes at Jan 30 13:51:53.702580 and repeats verbatim through Jan 30 13:51:53.710837 …]
Jan 30 13:51:53.711138 kubelet[2510]: E0130 13:51:53.711121 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:53.711138 kubelet[2510]: W0130 13:51:53.711135 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:53.711197 kubelet[2510]: E0130 13:51:53.711147 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 30 13:51:53.782176 containerd[1453]: time="2025-01-30T13:51:53.782120491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:53.783135 containerd[1453]: time="2025-01-30T13:51:53.783094472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:51:53.784383 containerd[1453]: time="2025-01-30T13:51:53.784340767Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:53.786590 containerd[1453]: time="2025-01-30T13:51:53.786554211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:53.787189 containerd[1453]: time="2025-01-30T13:51:53.787140459Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.25906104s" Jan 30 13:51:53.787216 containerd[1453]: time="2025-01-30T13:51:53.787184893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:51:53.788893 containerd[1453]: time="2025-01-30T13:51:53.788855321Z" level=info msg="CreateContainer within sandbox \"bf3ef58a566962cb7ac3d718626b831b746c206f168e594c3ad2df776dbebf6a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:51:53.804614 containerd[1453]: time="2025-01-30T13:51:53.804571514Z" level=info msg="CreateContainer within sandbox \"bf3ef58a566962cb7ac3d718626b831b746c206f168e594c3ad2df776dbebf6a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"07da80a8d2c58c11cfb606323d377d37e13f099d0a495a92dde2ffa9b35eb260\"" Jan 30 13:51:53.805050 containerd[1453]: time="2025-01-30T13:51:53.805015593Z" level=info msg="StartContainer for \"07da80a8d2c58c11cfb606323d377d37e13f099d0a495a92dde2ffa9b35eb260\"" Jan 30 13:51:53.834907 systemd[1]: Started cri-containerd-07da80a8d2c58c11cfb606323d377d37e13f099d0a495a92dde2ffa9b35eb260.scope - libcontainer container 07da80a8d2c58c11cfb606323d377d37e13f099d0a495a92dde2ffa9b35eb260. Jan 30 13:51:53.867072 containerd[1453]: time="2025-01-30T13:51:53.867022968Z" level=info msg="StartContainer for \"07da80a8d2c58c11cfb606323d377d37e13f099d0a495a92dde2ffa9b35eb260\" returns successfully" Jan 30 13:51:53.884423 systemd[1]: cri-containerd-07da80a8d2c58c11cfb606323d377d37e13f099d0a495a92dde2ffa9b35eb260.scope: Deactivated successfully. Jan 30 13:51:53.909490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07da80a8d2c58c11cfb606323d377d37e13f099d0a495a92dde2ffa9b35eb260-rootfs.mount: Deactivated successfully. 
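Context for the repeated FlexVolume errors above: the kubelet's dynamic plugin probe executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and expects a JSON status object on stdout. The binary is not installed yet, so the call returns no output and the JSON unmarshal in driver-call.go fails; the flexvol-driver init container started above (from the pod2daemon-flexvol image) is typically what installs that driver. The stub below is only a minimal, hypothetical sketch of the response shape a FlexVolume driver is expected to print; it is illustrative, not part of Flatcar, Calico, or the kubelet.

    #!/usr/bin/env python3
    # Hypothetical stand-in for a FlexVolume driver's "init" handler, shown only to
    # illustrate the JSON contract the kubelet checks; the real uds driver is the
    # binary installed by the flexvol-driver init container.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Printing nothing here is exactly what produces
            # "unexpected end of JSON input" in driver-call.go:262.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported", "message": "operation not implemented: " + op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())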
Jan 30 13:51:54.318138 containerd[1453]: time="2025-01-30T13:51:54.318084809Z" level=info msg="shim disconnected" id=07da80a8d2c58c11cfb606323d377d37e13f099d0a495a92dde2ffa9b35eb260 namespace=k8s.io Jan 30 13:51:54.318138 containerd[1453]: time="2025-01-30T13:51:54.318138300Z" level=warning msg="cleaning up after shim disconnected" id=07da80a8d2c58c11cfb606323d377d37e13f099d0a495a92dde2ffa9b35eb260 namespace=k8s.io Jan 30 13:51:54.318357 containerd[1453]: time="2025-01-30T13:51:54.318156664Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:51:54.611112 kubelet[2510]: E0130 13:51:54.610911 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lzkrl" podUID="671740aa-5720-4131-9f78-6538b2c8e710" Jan 30 13:51:54.662800 kubelet[2510]: E0130 13:51:54.662746 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:54.664117 containerd[1453]: time="2025-01-30T13:51:54.664072213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:51:56.610706 kubelet[2510]: E0130 13:51:56.610659 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lzkrl" podUID="671740aa-5720-4131-9f78-6538b2c8e710" Jan 30 13:51:58.611280 kubelet[2510]: E0130 13:51:58.611224 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lzkrl" podUID="671740aa-5720-4131-9f78-6538b2c8e710" Jan 30 13:51:59.551928 containerd[1453]: time="2025-01-30T13:51:59.551870890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:59.552822 containerd[1453]: time="2025-01-30T13:51:59.552786075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:51:59.553926 containerd[1453]: time="2025-01-30T13:51:59.553889475Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:59.556397 containerd[1453]: time="2025-01-30T13:51:59.556361156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:59.557032 containerd[1453]: time="2025-01-30T13:51:59.557000250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.892879045s" Jan 30 13:51:59.557087 containerd[1453]: time="2025-01-30T13:51:59.557037010Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:51:59.560099 containerd[1453]: time="2025-01-30T13:51:59.560064197Z" level=info msg="CreateContainer within sandbox \"bf3ef58a566962cb7ac3d718626b831b746c206f168e594c3ad2df776dbebf6a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:51:59.576306 containerd[1453]: time="2025-01-30T13:51:59.576253128Z" level=info msg="CreateContainer within sandbox \"bf3ef58a566962cb7ac3d718626b831b746c206f168e594c3ad2df776dbebf6a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eadd3a9275f9fd983d870965b78998ccb2a8c11a26f548e6892431af3552672d\"" Jan 30 13:51:59.576874 containerd[1453]: time="2025-01-30T13:51:59.576838111Z" level=info msg="StartContainer for \"eadd3a9275f9fd983d870965b78998ccb2a8c11a26f548e6892431af3552672d\"" Jan 30 13:51:59.624990 systemd[1]: Started cri-containerd-eadd3a9275f9fd983d870965b78998ccb2a8c11a26f548e6892431af3552672d.scope - libcontainer container eadd3a9275f9fd983d870965b78998ccb2a8c11a26f548e6892431af3552672d. Jan 30 13:51:59.709730 containerd[1453]: time="2025-01-30T13:51:59.709652356Z" level=info msg="StartContainer for \"eadd3a9275f9fd983d870965b78998ccb2a8c11a26f548e6892431af3552672d\" returns successfully" Jan 30 13:52:00.610394 kubelet[2510]: E0130 13:52:00.610349 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lzkrl" podUID="671740aa-5720-4131-9f78-6538b2c8e710" Jan 30 13:52:00.714931 kubelet[2510]: E0130 13:52:00.714884 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:01.716498 kubelet[2510]: E0130 13:52:01.716270 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:01.825286 containerd[1453]: time="2025-01-30T13:52:01.825230834Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:52:01.828405 systemd[1]: cri-containerd-eadd3a9275f9fd983d870965b78998ccb2a8c11a26f548e6892431af3552672d.scope: Deactivated successfully. Jan 30 13:52:01.848622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eadd3a9275f9fd983d870965b78998ccb2a8c11a26f548e6892431af3552672d-rootfs.mount: Deactivated successfully. Jan 30 13:52:01.894224 kubelet[2510]: I0130 13:52:01.894188 2510 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:52:02.205046 systemd[1]: Created slice kubepods-burstable-pod83cb02d6_877c_42c9_9de7_939337ef1dd0.slice - libcontainer container kubepods-burstable-pod83cb02d6_877c_42c9_9de7_939337ef1dd0.slice. Jan 30 13:52:02.209796 systemd[1]: Created slice kubepods-besteffort-pod4ef025b4_c416_42de_a536_3742569ad063.slice - libcontainer container kubepods-besteffort-pod4ef025b4_c416_42de_a536_3742569ad063.slice. 
Jan 30 13:52:02.214552 systemd[1]: Created slice kubepods-burstable-podce9daa1b_6e98_4839_9f3b_00c7bb80d288.slice - libcontainer container kubepods-burstable-podce9daa1b_6e98_4839_9f3b_00c7bb80d288.slice. Jan 30 13:52:02.218065 systemd[1]: Created slice kubepods-besteffort-pode495de51_4d7e_4867_b481_e0efba9ff50a.slice - libcontainer container kubepods-besteffort-pode495de51_4d7e_4867_b481_e0efba9ff50a.slice. Jan 30 13:52:02.222125 systemd[1]: Created slice kubepods-besteffort-pod2e76c3ad_2358_49df_8767_24e970a1ef0c.slice - libcontainer container kubepods-besteffort-pod2e76c3ad_2358_49df_8767_24e970a1ef0c.slice. Jan 30 13:52:02.267934 kubelet[2510]: I0130 13:52:02.267881 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce9daa1b-6e98-4839-9f3b-00c7bb80d288-config-volume\") pod \"coredns-6f6b679f8f-hc2vh\" (UID: \"ce9daa1b-6e98-4839-9f3b-00c7bb80d288\") " pod="kube-system/coredns-6f6b679f8f-hc2vh" Jan 30 13:52:02.267934 kubelet[2510]: I0130 13:52:02.267931 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e495de51-4d7e-4867-b481-e0efba9ff50a-calico-apiserver-certs\") pod \"calico-apiserver-f996cd869-vf8nc\" (UID: \"e495de51-4d7e-4867-b481-e0efba9ff50a\") " pod="calico-apiserver/calico-apiserver-f996cd869-vf8nc" Jan 30 13:52:02.268106 kubelet[2510]: I0130 13:52:02.267951 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e76c3ad-2358-49df-8767-24e970a1ef0c-tigera-ca-bundle\") pod \"calico-kube-controllers-7f465bb95b-wgbwq\" (UID: \"2e76c3ad-2358-49df-8767-24e970a1ef0c\") " pod="calico-system/calico-kube-controllers-7f465bb95b-wgbwq" Jan 30 13:52:02.268106 kubelet[2510]: I0130 13:52:02.268024 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p47nc\" (UniqueName: \"kubernetes.io/projected/2e76c3ad-2358-49df-8767-24e970a1ef0c-kube-api-access-p47nc\") pod \"calico-kube-controllers-7f465bb95b-wgbwq\" (UID: \"2e76c3ad-2358-49df-8767-24e970a1ef0c\") " pod="calico-system/calico-kube-controllers-7f465bb95b-wgbwq" Jan 30 13:52:02.268106 kubelet[2510]: I0130 13:52:02.268045 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxtxk\" (UniqueName: \"kubernetes.io/projected/83cb02d6-877c-42c9-9de7-939337ef1dd0-kube-api-access-jxtxk\") pod \"coredns-6f6b679f8f-w6jzk\" (UID: \"83cb02d6-877c-42c9-9de7-939337ef1dd0\") " pod="kube-system/coredns-6f6b679f8f-w6jzk" Jan 30 13:52:02.268106 kubelet[2510]: I0130 13:52:02.268060 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frxsj\" (UniqueName: \"kubernetes.io/projected/e495de51-4d7e-4867-b481-e0efba9ff50a-kube-api-access-frxsj\") pod \"calico-apiserver-f996cd869-vf8nc\" (UID: \"e495de51-4d7e-4867-b481-e0efba9ff50a\") " pod="calico-apiserver/calico-apiserver-f996cd869-vf8nc" Jan 30 13:52:02.268106 kubelet[2510]: I0130 13:52:02.268075 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4ef025b4-c416-42de-a536-3742569ad063-calico-apiserver-certs\") pod \"calico-apiserver-f996cd869-ggsld\" (UID: \"4ef025b4-c416-42de-a536-3742569ad063\") 
" pod="calico-apiserver/calico-apiserver-f996cd869-ggsld" Jan 30 13:52:02.268231 kubelet[2510]: I0130 13:52:02.268091 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lh5r\" (UniqueName: \"kubernetes.io/projected/4ef025b4-c416-42de-a536-3742569ad063-kube-api-access-9lh5r\") pod \"calico-apiserver-f996cd869-ggsld\" (UID: \"4ef025b4-c416-42de-a536-3742569ad063\") " pod="calico-apiserver/calico-apiserver-f996cd869-ggsld" Jan 30 13:52:02.268231 kubelet[2510]: I0130 13:52:02.268128 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83cb02d6-877c-42c9-9de7-939337ef1dd0-config-volume\") pod \"coredns-6f6b679f8f-w6jzk\" (UID: \"83cb02d6-877c-42c9-9de7-939337ef1dd0\") " pod="kube-system/coredns-6f6b679f8f-w6jzk" Jan 30 13:52:02.268231 kubelet[2510]: I0130 13:52:02.268151 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9262\" (UniqueName: \"kubernetes.io/projected/ce9daa1b-6e98-4839-9f3b-00c7bb80d288-kube-api-access-h9262\") pod \"coredns-6f6b679f8f-hc2vh\" (UID: \"ce9daa1b-6e98-4839-9f3b-00c7bb80d288\") " pod="kube-system/coredns-6f6b679f8f-hc2vh" Jan 30 13:52:02.434614 containerd[1453]: time="2025-01-30T13:52:02.434520760Z" level=info msg="shim disconnected" id=eadd3a9275f9fd983d870965b78998ccb2a8c11a26f548e6892431af3552672d namespace=k8s.io Jan 30 13:52:02.434614 containerd[1453]: time="2025-01-30T13:52:02.434575865Z" level=warning msg="cleaning up after shim disconnected" id=eadd3a9275f9fd983d870965b78998ccb2a8c11a26f548e6892431af3552672d namespace=k8s.io Jan 30 13:52:02.434614 containerd[1453]: time="2025-01-30T13:52:02.434586454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:02.508501 kubelet[2510]: E0130 13:52:02.508376 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:02.509584 containerd[1453]: time="2025-01-30T13:52:02.509550292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w6jzk,Uid:83cb02d6-877c-42c9-9de7-939337ef1dd0,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:02.513071 containerd[1453]: time="2025-01-30T13:52:02.513038152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f996cd869-ggsld,Uid:4ef025b4-c416-42de-a536-3742569ad063,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:52:02.517272 kubelet[2510]: E0130 13:52:02.517250 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:02.517551 containerd[1453]: time="2025-01-30T13:52:02.517528270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hc2vh,Uid:ce9daa1b-6e98-4839-9f3b-00c7bb80d288,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:02.520799 containerd[1453]: time="2025-01-30T13:52:02.520760198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f996cd869-vf8nc,Uid:e495de51-4d7e-4867-b481-e0efba9ff50a,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:52:02.524381 containerd[1453]: time="2025-01-30T13:52:02.524354096Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7f465bb95b-wgbwq,Uid:2e76c3ad-2358-49df-8767-24e970a1ef0c,Namespace:calico-system,Attempt:0,}" Jan 30 13:52:02.619514 systemd[1]: Created slice kubepods-besteffort-pod671740aa_5720_4131_9f78_6538b2c8e710.slice - libcontainer container kubepods-besteffort-pod671740aa_5720_4131_9f78_6538b2c8e710.slice. Jan 30 13:52:02.619943 containerd[1453]: time="2025-01-30T13:52:02.619824841Z" level=error msg="Failed to destroy network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.620554 containerd[1453]: time="2025-01-30T13:52:02.620519299Z" level=error msg="encountered an error cleaning up failed sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.620709 containerd[1453]: time="2025-01-30T13:52:02.620621642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f996cd869-ggsld,Uid:4ef025b4-c416-42de-a536-3742569ad063,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.620911 kubelet[2510]: E0130 13:52:02.620882 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.620956 kubelet[2510]: E0130 13:52:02.620933 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f996cd869-ggsld" Jan 30 13:52:02.621029 kubelet[2510]: E0130 13:52:02.620954 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f996cd869-ggsld" Jan 30 13:52:02.621029 kubelet[2510]: E0130 13:52:02.620996 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f996cd869-ggsld_calico-apiserver(4ef025b4-c416-42de-a536-3742569ad063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-f996cd869-ggsld_calico-apiserver(4ef025b4-c416-42de-a536-3742569ad063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f996cd869-ggsld" podUID="4ef025b4-c416-42de-a536-3742569ad063" Jan 30 13:52:02.622704 containerd[1453]: time="2025-01-30T13:52:02.622363653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lzkrl,Uid:671740aa-5720-4131-9f78-6538b2c8e710,Namespace:calico-system,Attempt:0,}" Jan 30 13:52:02.636172 containerd[1453]: time="2025-01-30T13:52:02.636120284Z" level=error msg="Failed to destroy network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.636945 containerd[1453]: time="2025-01-30T13:52:02.636910552Z" level=error msg="encountered an error cleaning up failed sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.639123 containerd[1453]: time="2025-01-30T13:52:02.637148350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w6jzk,Uid:83cb02d6-877c-42c9-9de7-939337ef1dd0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.639272 kubelet[2510]: E0130 13:52:02.637341 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.639272 kubelet[2510]: E0130 13:52:02.637398 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w6jzk" Jan 30 13:52:02.639272 kubelet[2510]: E0130 13:52:02.637418 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-w6jzk" Jan 30 13:52:02.639466 kubelet[2510]: E0130 13:52:02.637457 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-w6jzk_kube-system(83cb02d6-877c-42c9-9de7-939337ef1dd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-w6jzk_kube-system(83cb02d6-877c-42c9-9de7-939337ef1dd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-w6jzk" podUID="83cb02d6-877c-42c9-9de7-939337ef1dd0" Jan 30 13:52:02.641810 containerd[1453]: time="2025-01-30T13:52:02.641699533Z" level=error msg="Failed to destroy network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.642174 containerd[1453]: time="2025-01-30T13:52:02.642143278Z" level=error msg="encountered an error cleaning up failed sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.642212 containerd[1453]: time="2025-01-30T13:52:02.642189025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hc2vh,Uid:ce9daa1b-6e98-4839-9f3b-00c7bb80d288,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.642354 kubelet[2510]: E0130 13:52:02.642330 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.642549 kubelet[2510]: E0130 13:52:02.642444 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hc2vh" Jan 30 13:52:02.642549 kubelet[2510]: E0130 13:52:02.642465 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hc2vh" Jan 30 13:52:02.642549 kubelet[2510]: E0130 13:52:02.642509 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hc2vh_kube-system(ce9daa1b-6e98-4839-9f3b-00c7bb80d288)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hc2vh_kube-system(ce9daa1b-6e98-4839-9f3b-00c7bb80d288)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hc2vh" podUID="ce9daa1b-6e98-4839-9f3b-00c7bb80d288" Jan 30 13:52:02.654862 containerd[1453]: time="2025-01-30T13:52:02.654810528Z" level=error msg="Failed to destroy network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.655275 containerd[1453]: time="2025-01-30T13:52:02.655241600Z" level=error msg="encountered an error cleaning up failed sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.655320 containerd[1453]: time="2025-01-30T13:52:02.655295611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f996cd869-vf8nc,Uid:e495de51-4d7e-4867-b481-e0efba9ff50a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.655560 kubelet[2510]: E0130 13:52:02.655520 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.655665 kubelet[2510]: E0130 13:52:02.655649 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f996cd869-vf8nc" Jan 30 13:52:02.656580 kubelet[2510]: E0130 13:52:02.655722 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f996cd869-vf8nc" Jan 30 13:52:02.656580 kubelet[2510]: E0130 13:52:02.655794 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f996cd869-vf8nc_calico-apiserver(e495de51-4d7e-4867-b481-e0efba9ff50a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f996cd869-vf8nc_calico-apiserver(e495de51-4d7e-4867-b481-e0efba9ff50a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f996cd869-vf8nc" podUID="e495de51-4d7e-4867-b481-e0efba9ff50a" Jan 30 13:52:02.659252 containerd[1453]: time="2025-01-30T13:52:02.659223350Z" level=error msg="Failed to destroy network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.659650 containerd[1453]: time="2025-01-30T13:52:02.659624325Z" level=error msg="encountered an error cleaning up failed sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.659808 containerd[1453]: time="2025-01-30T13:52:02.659760131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f465bb95b-wgbwq,Uid:2e76c3ad-2358-49df-8767-24e970a1ef0c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.660023 kubelet[2510]: E0130 13:52:02.659947 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.660023 kubelet[2510]: E0130 13:52:02.659992 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f465bb95b-wgbwq" Jan 30 13:52:02.660023 kubelet[2510]: E0130 13:52:02.660010 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f465bb95b-wgbwq" Jan 30 13:52:02.660132 kubelet[2510]: E0130 13:52:02.660054 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f465bb95b-wgbwq_calico-system(2e76c3ad-2358-49df-8767-24e970a1ef0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f465bb95b-wgbwq_calico-system(2e76c3ad-2358-49df-8767-24e970a1ef0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f465bb95b-wgbwq" podUID="2e76c3ad-2358-49df-8767-24e970a1ef0c" Jan 30 13:52:02.692563 containerd[1453]: time="2025-01-30T13:52:02.692494498Z" level=error msg="Failed to destroy network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.692966 containerd[1453]: time="2025-01-30T13:52:02.692933495Z" level=error msg="encountered an error cleaning up failed sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.693029 containerd[1453]: time="2025-01-30T13:52:02.693006452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lzkrl,Uid:671740aa-5720-4131-9f78-6538b2c8e710,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.693278 kubelet[2510]: E0130 13:52:02.693230 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.693337 kubelet[2510]: E0130 13:52:02.693305 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lzkrl" Jan 30 13:52:02.693337 kubelet[2510]: E0130 13:52:02.693326 2510 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lzkrl" Jan 30 13:52:02.693405 kubelet[2510]: E0130 13:52:02.693371 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lzkrl_calico-system(671740aa-5720-4131-9f78-6538b2c8e710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lzkrl_calico-system(671740aa-5720-4131-9f78-6538b2c8e710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lzkrl" podUID="671740aa-5720-4131-9f78-6538b2c8e710" Jan 30 13:52:02.718282 kubelet[2510]: I0130 13:52:02.718206 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:02.718904 containerd[1453]: time="2025-01-30T13:52:02.718838849Z" level=info msg="StopPodSandbox for \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\"" Jan 30 13:52:02.718944 kubelet[2510]: I0130 13:52:02.718904 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:02.719051 containerd[1453]: time="2025-01-30T13:52:02.719004530Z" level=info msg="Ensure that sandbox 45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582 in task-service has been cleanup successfully" Jan 30 13:52:02.719483 containerd[1453]: time="2025-01-30T13:52:02.719464547Z" level=info msg="StopPodSandbox for \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\"" Jan 30 13:52:02.719634 containerd[1453]: time="2025-01-30T13:52:02.719619068Z" level=info msg="Ensure that sandbox 16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa in task-service has been cleanup successfully" Jan 30 13:52:02.720454 kubelet[2510]: I0130 13:52:02.720440 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:02.721575 containerd[1453]: time="2025-01-30T13:52:02.721274867Z" level=info msg="StopPodSandbox for \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\"" Jan 30 13:52:02.721575 containerd[1453]: time="2025-01-30T13:52:02.721402628Z" level=info msg="Ensure that sandbox f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a in task-service has been cleanup successfully" Jan 30 13:52:02.721983 kubelet[2510]: I0130 13:52:02.721946 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:02.722345 containerd[1453]: time="2025-01-30T13:52:02.722307613Z" level=info msg="StopPodSandbox for \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\"" Jan 30 13:52:02.722519 containerd[1453]: 
time="2025-01-30T13:52:02.722486519Z" level=info msg="Ensure that sandbox 9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856 in task-service has been cleanup successfully" Jan 30 13:52:02.724633 kubelet[2510]: I0130 13:52:02.724524 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:02.725986 containerd[1453]: time="2025-01-30T13:52:02.725945414Z" level=info msg="StopPodSandbox for \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\"" Jan 30 13:52:02.726206 containerd[1453]: time="2025-01-30T13:52:02.726189454Z" level=info msg="Ensure that sandbox b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c in task-service has been cleanup successfully" Jan 30 13:52:02.729254 kubelet[2510]: E0130 13:52:02.729227 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:02.730184 containerd[1453]: time="2025-01-30T13:52:02.729980946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:52:02.731506 kubelet[2510]: I0130 13:52:02.731490 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:02.732296 containerd[1453]: time="2025-01-30T13:52:02.731936479Z" level=info msg="StopPodSandbox for \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\"" Jan 30 13:52:02.732296 containerd[1453]: time="2025-01-30T13:52:02.732085921Z" level=info msg="Ensure that sandbox 7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2 in task-service has been cleanup successfully" Jan 30 13:52:02.768432 containerd[1453]: time="2025-01-30T13:52:02.768286977Z" level=error msg="StopPodSandbox for \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\" failed" error="failed to destroy network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.768698 kubelet[2510]: E0130 13:52:02.768654 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:02.768806 kubelet[2510]: E0130 13:52:02.768711 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a"} Jan 30 13:52:02.768806 kubelet[2510]: E0130 13:52:02.768776 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"671740aa-5720-4131-9f78-6538b2c8e710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:02.768893 kubelet[2510]: E0130 13:52:02.768800 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"671740aa-5720-4131-9f78-6538b2c8e710\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lzkrl" podUID="671740aa-5720-4131-9f78-6538b2c8e710" Jan 30 13:52:02.772905 containerd[1453]: time="2025-01-30T13:52:02.772864830Z" level=error msg="StopPodSandbox for \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\" failed" error="failed to destroy network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.776987 containerd[1453]: time="2025-01-30T13:52:02.776582201Z" level=error msg="StopPodSandbox for \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\" failed" error="failed to destroy network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.777097 kubelet[2510]: E0130 13:52:02.776674 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:02.777097 kubelet[2510]: E0130 13:52:02.776709 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582"} Jan 30 13:52:02.777097 kubelet[2510]: E0130 13:52:02.776745 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ef025b4-c416-42de-a536-3742569ad063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:02.777097 kubelet[2510]: E0130 13:52:02.776838 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ef025b4-c416-42de-a536-3742569ad063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-f996cd869-ggsld" podUID="4ef025b4-c416-42de-a536-3742569ad063" Jan 30 13:52:02.777321 kubelet[2510]: E0130 13:52:02.776886 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:02.777321 kubelet[2510]: E0130 13:52:02.776905 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856"} Jan 30 13:52:02.777321 kubelet[2510]: E0130 13:52:02.776921 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e76c3ad-2358-49df-8767-24e970a1ef0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:02.777321 kubelet[2510]: E0130 13:52:02.776941 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e76c3ad-2358-49df-8767-24e970a1ef0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f465bb95b-wgbwq" podUID="2e76c3ad-2358-49df-8767-24e970a1ef0c" Jan 30 13:52:02.778925 containerd[1453]: time="2025-01-30T13:52:02.778899687Z" level=error msg="StopPodSandbox for \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\" failed" error="failed to destroy network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.779132 kubelet[2510]: E0130 13:52:02.779098 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:02.779224 kubelet[2510]: E0130 13:52:02.779209 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa"} Jan 30 13:52:02.779317 kubelet[2510]: E0130 13:52:02.779277 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83cb02d6-877c-42c9-9de7-939337ef1dd0\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:02.779317 kubelet[2510]: E0130 13:52:02.779298 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83cb02d6-877c-42c9-9de7-939337ef1dd0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-w6jzk" podUID="83cb02d6-877c-42c9-9de7-939337ef1dd0" Jan 30 13:52:02.785568 containerd[1453]: time="2025-01-30T13:52:02.785533562Z" level=error msg="StopPodSandbox for \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\" failed" error="failed to destroy network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.786828 kubelet[2510]: E0130 13:52:02.785697 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:02.786884 kubelet[2510]: E0130 13:52:02.786850 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2"} Jan 30 13:52:02.786910 kubelet[2510]: E0130 13:52:02.786882 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e495de51-4d7e-4867-b481-e0efba9ff50a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:02.786967 kubelet[2510]: E0130 13:52:02.786902 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e495de51-4d7e-4867-b481-e0efba9ff50a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f996cd869-vf8nc" podUID="e495de51-4d7e-4867-b481-e0efba9ff50a" Jan 30 13:52:02.788800 containerd[1453]: time="2025-01-30T13:52:02.788742746Z" level=error msg="StopPodSandbox for 
\"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\" failed" error="failed to destroy network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.788966 kubelet[2510]: E0130 13:52:02.788931 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:02.788966 kubelet[2510]: E0130 13:52:02.788964 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c"} Jan 30 13:52:02.789041 kubelet[2510]: E0130 13:52:02.788990 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce9daa1b-6e98-4839-9f3b-00c7bb80d288\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:02.789041 kubelet[2510]: E0130 13:52:02.789009 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce9daa1b-6e98-4839-9f3b-00c7bb80d288\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hc2vh" podUID="ce9daa1b-6e98-4839-9f3b-00c7bb80d288" Jan 30 13:52:06.695072 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:48656.service - OpenSSH per-connection server daemon (10.0.0.1:48656). Jan 30 13:52:06.735114 sshd[3674]: Accepted publickey for core from 10.0.0.1 port 48656 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:06.737922 sshd[3674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:06.744658 systemd-logind[1441]: New session 10 of user core. Jan 30 13:52:06.754044 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:52:06.885294 sshd[3674]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:06.890939 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:52:06.891328 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:48656.service: Deactivated successfully. Jan 30 13:52:06.893520 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:52:06.895103 systemd-logind[1441]: Removed session 10. 
Jan 30 13:52:08.045067 kubelet[2510]: I0130 13:52:08.045005 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:08.045758 kubelet[2510]: E0130 13:52:08.045347 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:08.160159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548949559.mount: Deactivated successfully. Jan 30 13:52:08.744027 kubelet[2510]: E0130 13:52:08.743997 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:08.755738 containerd[1453]: time="2025-01-30T13:52:08.755695808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:08.759831 containerd[1453]: time="2025-01-30T13:52:08.759787625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:52:08.762350 containerd[1453]: time="2025-01-30T13:52:08.762310671Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:08.764584 containerd[1453]: time="2025-01-30T13:52:08.764537970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:08.765445 containerd[1453]: time="2025-01-30T13:52:08.765360537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.035339827s" Jan 30 13:52:08.774550 containerd[1453]: time="2025-01-30T13:52:08.774198470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:52:08.794091 containerd[1453]: time="2025-01-30T13:52:08.794034961Z" level=info msg="CreateContainer within sandbox \"bf3ef58a566962cb7ac3d718626b831b746c206f168e594c3ad2df776dbebf6a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:52:08.813896 containerd[1453]: time="2025-01-30T13:52:08.813852627Z" level=info msg="CreateContainer within sandbox \"bf3ef58a566962cb7ac3d718626b831b746c206f168e594c3ad2df776dbebf6a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ad64184043ef7763c8484deb5ab9971116646c7f74b022185181f356e681c8c6\"" Jan 30 13:52:08.814836 containerd[1453]: time="2025-01-30T13:52:08.814299949Z" level=info msg="StartContainer for \"ad64184043ef7763c8484deb5ab9971116646c7f74b022185181f356e681c8c6\"" Jan 30 13:52:08.881005 systemd[1]: Started cri-containerd-ad64184043ef7763c8484deb5ab9971116646c7f74b022185181f356e681c8c6.scope - libcontainer container ad64184043ef7763c8484deb5ab9971116646c7f74b022185181f356e681c8c6. Jan 30 13:52:09.100019 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:52:09.100182 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld . All Rights Reserved. Jan 30 13:52:09.111372 containerd[1453]: time="2025-01-30T13:52:09.111330320Z" level=info msg="StartContainer for \"ad64184043ef7763c8484deb5ab9971116646c7f74b022185181f356e681c8c6\" returns successfully" Jan 30 13:52:09.747818 kubelet[2510]: E0130 13:52:09.747354 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:09.760666 kubelet[2510]: I0130 13:52:09.760598 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r8ll8" podStartSLOduration=2.284416512 podStartE2EDuration="21.760580585s" podCreationTimestamp="2025-01-30 13:51:48 +0000 UTC" firstStartedPulling="2025-01-30 13:51:49.31001367 +0000 UTC m=+13.791070885" lastFinishedPulling="2025-01-30 13:52:08.786177743 +0000 UTC m=+33.267234958" observedRunningTime="2025-01-30 13:52:09.760287113 +0000 UTC m=+34.241344328" watchObservedRunningTime="2025-01-30 13:52:09.760580585 +0000 UTC m=+34.241637800" Jan 30 13:52:10.725879 kernel: bpftool[3912]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:52:10.750151 kubelet[2510]: E0130 13:52:10.750104 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:10.993439 systemd-networkd[1401]: vxlan.calico: Link UP Jan 30 13:52:10.993452 systemd-networkd[1401]: vxlan.calico: Gained carrier Jan 30 13:52:11.899733 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:43102.service - OpenSSH per-connection server daemon (10.0.0.1:43102). Jan 30 13:52:11.939447 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 43102 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:11.941100 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:11.945392 systemd-logind[1441]: New session 11 of user core. Jan 30 13:52:11.956885 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:52:12.087473 sshd[4007]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:12.091980 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:43102.service: Deactivated successfully. Jan 30 13:52:12.094396 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:52:12.095231 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:52:12.096402 systemd-logind[1441]: Removed session 11. Jan 30 13:52:13.020013 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Jan 30 13:52:13.611090 containerd[1453]: time="2025-01-30T13:52:13.610984288Z" level=info msg="StopPodSandbox for \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\"" Jan 30 13:52:13.611647 containerd[1453]: time="2025-01-30T13:52:13.611154458Z" level=info msg="StopPodSandbox for \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\"" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.695 [INFO][4055] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.696 [INFO][4055] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" iface="eth0" netns="/var/run/netns/cni-c68887e2-72d2-28f3-f989-c5cde87ae607" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.696 [INFO][4055] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" iface="eth0" netns="/var/run/netns/cni-c68887e2-72d2-28f3-f989-c5cde87ae607" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.697 [INFO][4055] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" iface="eth0" netns="/var/run/netns/cni-c68887e2-72d2-28f3-f989-c5cde87ae607" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.697 [INFO][4055] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.697 [INFO][4055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.749 [INFO][4070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" HandleID="k8s-pod-network.f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.749 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.749 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.756 [WARNING][4070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" HandleID="k8s-pod-network.f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.756 [INFO][4070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" HandleID="k8s-pod-network.f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.757 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:13.761711 containerd[1453]: 2025-01-30 13:52:13.759 [INFO][4055] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:13.762141 containerd[1453]: time="2025-01-30T13:52:13.761948719Z" level=info msg="TearDown network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\" successfully" Jan 30 13:52:13.762141 containerd[1453]: time="2025-01-30T13:52:13.761980779Z" level=info msg="StopPodSandbox for \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\" returns successfully" Jan 30 13:52:13.763251 containerd[1453]: time="2025-01-30T13:52:13.763210851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lzkrl,Uid:671740aa-5720-4131-9f78-6538b2c8e710,Namespace:calico-system,Attempt:1,}" Jan 30 13:52:13.765030 systemd[1]: run-netns-cni\x2dc68887e2\x2d72d2\x2d28f3\x2df989\x2dc5cde87ae607.mount: Deactivated successfully. Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.696 [INFO][4054] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.697 [INFO][4054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" iface="eth0" netns="/var/run/netns/cni-47c02859-c063-5496-74bf-51a34a8e5d72" Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.697 [INFO][4054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" iface="eth0" netns="/var/run/netns/cni-47c02859-c063-5496-74bf-51a34a8e5d72" Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.697 [INFO][4054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" iface="eth0" netns="/var/run/netns/cni-47c02859-c063-5496-74bf-51a34a8e5d72" Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.698 [INFO][4054] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.698 [INFO][4054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.749 [INFO][4071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" HandleID="k8s-pod-network.7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.749 [INFO][4071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.757 [INFO][4071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.763 [WARNING][4071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" HandleID="k8s-pod-network.7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.763 [INFO][4071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" HandleID="k8s-pod-network.7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.765 [INFO][4071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:13.770168 containerd[1453]: 2025-01-30 13:52:13.767 [INFO][4054] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:13.771822 containerd[1453]: time="2025-01-30T13:52:13.770309843Z" level=info msg="TearDown network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\" successfully" Jan 30 13:52:13.771822 containerd[1453]: time="2025-01-30T13:52:13.770333467Z" level=info msg="StopPodSandbox for \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\" returns successfully" Jan 30 13:52:13.771822 containerd[1453]: time="2025-01-30T13:52:13.771216166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f996cd869-vf8nc,Uid:e495de51-4d7e-4867-b481-e0efba9ff50a,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:52:13.772889 systemd[1]: run-netns-cni\x2d47c02859\x2dc063\x2d5496\x2d74bf\x2d51a34a8e5d72.mount: Deactivated successfully. Jan 30 13:52:13.888007 systemd-networkd[1401]: cali0e02888b025: Link UP Jan 30 13:52:13.888803 systemd-networkd[1401]: cali0e02888b025: Gained carrier Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.824 [INFO][4086] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lzkrl-eth0 csi-node-driver- calico-system 671740aa-5720-4131-9f78-6538b2c8e710 882 0 2025-01-30 13:51:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lzkrl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0e02888b025 [] []}} ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Namespace="calico-system" Pod="csi-node-driver-lzkrl" WorkloadEndpoint="localhost-k8s-csi--node--driver--lzkrl-" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.824 [INFO][4086] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Namespace="calico-system" Pod="csi-node-driver-lzkrl" WorkloadEndpoint="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.851 [INFO][4113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" HandleID="k8s-pod-network.34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" 
Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.859 [INFO][4113] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" HandleID="k8s-pod-network.34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000390020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lzkrl", "timestamp":"2025-01-30 13:52:13.851516471 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.859 [INFO][4113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.859 [INFO][4113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.859 [INFO][4113] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.861 [INFO][4113] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" host="localhost" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.866 [INFO][4113] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.869 [INFO][4113] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.871 [INFO][4113] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.872 [INFO][4113] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.872 [INFO][4113] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" host="localhost" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.873 [INFO][4113] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.877 [INFO][4113] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" host="localhost" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.881 [INFO][4113] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" host="localhost" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.882 [INFO][4113] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" host="localhost" Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.882 [INFO][4113] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Jan 30 13:52:13.900622 containerd[1453]: 2025-01-30 13:52:13.882 [INFO][4113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" HandleID="k8s-pod-network.34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.901984 containerd[1453]: 2025-01-30 13:52:13.884 [INFO][4086] cni-plugin/k8s.go 386: Populated endpoint ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Namespace="calico-system" Pod="csi-node-driver-lzkrl" WorkloadEndpoint="localhost-k8s-csi--node--driver--lzkrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lzkrl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"671740aa-5720-4131-9f78-6538b2c8e710", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lzkrl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e02888b025", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:13.901984 containerd[1453]: 2025-01-30 13:52:13.884 [INFO][4086] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Namespace="calico-system" Pod="csi-node-driver-lzkrl" WorkloadEndpoint="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.901984 containerd[1453]: 2025-01-30 13:52:13.884 [INFO][4086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e02888b025 ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Namespace="calico-system" Pod="csi-node-driver-lzkrl" WorkloadEndpoint="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.901984 containerd[1453]: 2025-01-30 13:52:13.888 [INFO][4086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Namespace="calico-system" Pod="csi-node-driver-lzkrl" WorkloadEndpoint="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.901984 containerd[1453]: 2025-01-30 13:52:13.888 [INFO][4086] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Namespace="calico-system" Pod="csi-node-driver-lzkrl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lzkrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lzkrl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"671740aa-5720-4131-9f78-6538b2c8e710", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a", Pod:"csi-node-driver-lzkrl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e02888b025", MAC:"f6:00:9f:f9:2b:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:13.901984 containerd[1453]: 2025-01-30 13:52:13.897 [INFO][4086] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a" Namespace="calico-system" Pod="csi-node-driver-lzkrl" WorkloadEndpoint="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:13.932142 containerd[1453]: time="2025-01-30T13:52:13.932025018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:13.932293 containerd[1453]: time="2025-01-30T13:52:13.932142689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:13.932293 containerd[1453]: time="2025-01-30T13:52:13.932170933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:13.932348 containerd[1453]: time="2025-01-30T13:52:13.932298111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:13.953974 systemd[1]: Started cri-containerd-34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a.scope - libcontainer container 34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a. 
Jan 30 13:52:13.966889 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:13.995420 systemd-networkd[1401]: cali617cc4b5339: Link UP Jan 30 13:52:13.995821 containerd[1453]: time="2025-01-30T13:52:13.995483371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lzkrl,Uid:671740aa-5720-4131-9f78-6538b2c8e710,Namespace:calico-system,Attempt:1,} returns sandbox id \"34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a\"" Jan 30 13:52:13.995827 systemd-networkd[1401]: cali617cc4b5339: Gained carrier Jan 30 13:52:13.998789 containerd[1453]: time="2025-01-30T13:52:13.997525899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.825 [INFO][4097] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0 calico-apiserver-f996cd869- calico-apiserver e495de51-4d7e-4867-b481-e0efba9ff50a 883 0 2025-01-30 13:51:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f996cd869 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f996cd869-vf8nc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali617cc4b5339 [] []}} ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-vf8nc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--vf8nc-" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.825 [INFO][4097] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-vf8nc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.851 [INFO][4114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" HandleID="k8s-pod-network.04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.859 [INFO][4114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" HandleID="k8s-pod-network.04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f996cd869-vf8nc", "timestamp":"2025-01-30 13:52:13.851403589 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.859 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.882 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.882 [INFO][4114] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.963 [INFO][4114] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" host="localhost" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.968 [INFO][4114] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.972 [INFO][4114] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.974 [INFO][4114] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.976 [INFO][4114] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.976 [INFO][4114] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" host="localhost" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.978 [INFO][4114] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.982 [INFO][4114] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" host="localhost" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.989 [INFO][4114] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" host="localhost" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.989 [INFO][4114] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" host="localhost" Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.989 [INFO][4114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:14.010437 containerd[1453]: 2025-01-30 13:52:13.989 [INFO][4114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" HandleID="k8s-pod-network.04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:14.011310 containerd[1453]: 2025-01-30 13:52:13.992 [INFO][4097] cni-plugin/k8s.go 386: Populated endpoint ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-vf8nc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0", GenerateName:"calico-apiserver-f996cd869-", Namespace:"calico-apiserver", SelfLink:"", UID:"e495de51-4d7e-4867-b481-e0efba9ff50a", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f996cd869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f996cd869-vf8nc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali617cc4b5339", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.011310 containerd[1453]: 2025-01-30 13:52:13.992 [INFO][4097] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-vf8nc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:14.011310 containerd[1453]: 2025-01-30 13:52:13.992 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali617cc4b5339 ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-vf8nc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:14.011310 containerd[1453]: 2025-01-30 13:52:13.996 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-vf8nc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:14.011310 containerd[1453]: 2025-01-30 13:52:13.997 [INFO][4097] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" 
Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-vf8nc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0", GenerateName:"calico-apiserver-f996cd869-", Namespace:"calico-apiserver", SelfLink:"", UID:"e495de51-4d7e-4867-b481-e0efba9ff50a", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f996cd869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e", Pod:"calico-apiserver-f996cd869-vf8nc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali617cc4b5339", MAC:"a2:d0:61:aa:88:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.011310 containerd[1453]: 2025-01-30 13:52:14.007 [INFO][4097] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-vf8nc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:14.032668 containerd[1453]: time="2025-01-30T13:52:14.032513086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:14.032668 containerd[1453]: time="2025-01-30T13:52:14.032631979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:14.032668 containerd[1453]: time="2025-01-30T13:52:14.032653470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:14.032979 containerd[1453]: time="2025-01-30T13:52:14.032741416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:14.053945 systemd[1]: Started cri-containerd-04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e.scope - libcontainer container 04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e. 
Jan 30 13:52:14.065865 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:14.090828 containerd[1453]: time="2025-01-30T13:52:14.090753858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f996cd869-vf8nc,Uid:e495de51-4d7e-4867-b481-e0efba9ff50a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e\"" Jan 30 13:52:14.611357 containerd[1453]: time="2025-01-30T13:52:14.611311717Z" level=info msg="StopPodSandbox for \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\"" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.651 [INFO][4258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.651 [INFO][4258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" iface="eth0" netns="/var/run/netns/cni-b1044cf0-e66f-051a-6295-b8fc91018c55" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.651 [INFO][4258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" iface="eth0" netns="/var/run/netns/cni-b1044cf0-e66f-051a-6295-b8fc91018c55" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.652 [INFO][4258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" iface="eth0" netns="/var/run/netns/cni-b1044cf0-e66f-051a-6295-b8fc91018c55" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.652 [INFO][4258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.652 [INFO][4258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.672 [INFO][4266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" HandleID="k8s-pod-network.b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.672 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.672 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.677 [WARNING][4266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" HandleID="k8s-pod-network.b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.677 [INFO][4266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" HandleID="k8s-pod-network.b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.679 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:14.684037 containerd[1453]: 2025-01-30 13:52:14.681 [INFO][4258] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:14.684486 containerd[1453]: time="2025-01-30T13:52:14.684224801Z" level=info msg="TearDown network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\" successfully" Jan 30 13:52:14.684486 containerd[1453]: time="2025-01-30T13:52:14.684267291Z" level=info msg="StopPodSandbox for \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\" returns successfully" Jan 30 13:52:14.684683 kubelet[2510]: E0130 13:52:14.684644 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:14.685735 containerd[1453]: time="2025-01-30T13:52:14.685391513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hc2vh,Uid:ce9daa1b-6e98-4839-9f3b-00c7bb80d288,Namespace:kube-system,Attempt:1,}" Jan 30 13:52:14.772923 systemd[1]: run-netns-cni\x2db1044cf0\x2de66f\x2d051a\x2d6295\x2db8fc91018c55.mount: Deactivated successfully. 
Jan 30 13:52:14.822759 systemd-networkd[1401]: cali33c093dbbde: Link UP Jan 30 13:52:14.822980 systemd-networkd[1401]: cali33c093dbbde: Gained carrier Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.755 [INFO][4274] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0 coredns-6f6b679f8f- kube-system ce9daa1b-6e98-4839-9f3b-00c7bb80d288 897 0 2025-01-30 13:51:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hc2vh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali33c093dbbde [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hc2vh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hc2vh-" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.755 [INFO][4274] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hc2vh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.782 [INFO][4287] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" HandleID="k8s-pod-network.fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.791 [INFO][4287] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" HandleID="k8s-pod-network.fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050e40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hc2vh", "timestamp":"2025-01-30 13:52:14.782714777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.791 [INFO][4287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.791 [INFO][4287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.791 [INFO][4287] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.793 [INFO][4287] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" host="localhost" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.797 [INFO][4287] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.801 [INFO][4287] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.803 [INFO][4287] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.805 [INFO][4287] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.805 [INFO][4287] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" host="localhost" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.806 [INFO][4287] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.811 [INFO][4287] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" host="localhost" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.817 [INFO][4287] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" host="localhost" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.817 [INFO][4287] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" host="localhost" Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.817 [INFO][4287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:14.835722 containerd[1453]: 2025-01-30 13:52:14.817 [INFO][4287] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" HandleID="k8s-pod-network.fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.836578 containerd[1453]: 2025-01-30 13:52:14.820 [INFO][4274] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hc2vh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ce9daa1b-6e98-4839-9f3b-00c7bb80d288", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hc2vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33c093dbbde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.836578 containerd[1453]: 2025-01-30 13:52:14.820 [INFO][4274] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hc2vh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.836578 containerd[1453]: 2025-01-30 13:52:14.820 [INFO][4274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33c093dbbde ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hc2vh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.836578 containerd[1453]: 2025-01-30 13:52:14.823 [INFO][4274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hc2vh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.836578 containerd[1453]: 2025-01-30 13:52:14.824 
[INFO][4274] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hc2vh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ce9daa1b-6e98-4839-9f3b-00c7bb80d288", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd", Pod:"coredns-6f6b679f8f-hc2vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33c093dbbde", MAC:"82:89:a0:4e:66:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.836578 containerd[1453]: 2025-01-30 13:52:14.832 [INFO][4274] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hc2vh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:14.860715 containerd[1453]: time="2025-01-30T13:52:14.860612397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:14.860715 containerd[1453]: time="2025-01-30T13:52:14.860693860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:14.860715 containerd[1453]: time="2025-01-30T13:52:14.860708027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:14.861003 containerd[1453]: time="2025-01-30T13:52:14.860827400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:14.884942 systemd[1]: Started cri-containerd-fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd.scope - libcontainer container fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd. 
Jan 30 13:52:14.896351 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:14.918805 containerd[1453]: time="2025-01-30T13:52:14.918739406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hc2vh,Uid:ce9daa1b-6e98-4839-9f3b-00c7bb80d288,Namespace:kube-system,Attempt:1,} returns sandbox id \"fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd\"" Jan 30 13:52:14.919454 kubelet[2510]: E0130 13:52:14.919429 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:14.921402 containerd[1453]: time="2025-01-30T13:52:14.921370420Z" level=info msg="CreateContainer within sandbox \"fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:52:14.938546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858389564.mount: Deactivated successfully. Jan 30 13:52:14.943836 containerd[1453]: time="2025-01-30T13:52:14.943789548Z" level=info msg="CreateContainer within sandbox \"fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32702be4cc5e5764336e66d47198266fe4e5c73ba223c67b95ccff7473bceb27\"" Jan 30 13:52:14.944316 containerd[1453]: time="2025-01-30T13:52:14.944291501Z" level=info msg="StartContainer for \"32702be4cc5e5764336e66d47198266fe4e5c73ba223c67b95ccff7473bceb27\"" Jan 30 13:52:14.973956 systemd[1]: Started cri-containerd-32702be4cc5e5764336e66d47198266fe4e5c73ba223c67b95ccff7473bceb27.scope - libcontainer container 32702be4cc5e5764336e66d47198266fe4e5c73ba223c67b95ccff7473bceb27. 
Jan 30 13:52:15.001015 containerd[1453]: time="2025-01-30T13:52:15.000974957Z" level=info msg="StartContainer for \"32702be4cc5e5764336e66d47198266fe4e5c73ba223c67b95ccff7473bceb27\" returns successfully" Jan 30 13:52:15.195943 systemd-networkd[1401]: cali0e02888b025: Gained IPv6LL Jan 30 13:52:15.323975 systemd-networkd[1401]: cali617cc4b5339: Gained IPv6LL Jan 30 13:52:15.606487 containerd[1453]: time="2025-01-30T13:52:15.606432491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:15.607388 containerd[1453]: time="2025-01-30T13:52:15.607351628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:52:15.609050 containerd[1453]: time="2025-01-30T13:52:15.609018690Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:15.611712 containerd[1453]: time="2025-01-30T13:52:15.611509961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:15.613021 containerd[1453]: time="2025-01-30T13:52:15.612295598Z" level=info msg="StopPodSandbox for \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\"" Jan 30 13:52:15.613080 containerd[1453]: time="2025-01-30T13:52:15.613024767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.615462389s" Jan 30 13:52:15.613080 containerd[1453]: time="2025-01-30T13:52:15.613054663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:52:15.615598 containerd[1453]: time="2025-01-30T13:52:15.615565992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:52:15.616082 containerd[1453]: time="2025-01-30T13:52:15.616035604Z" level=info msg="CreateContainer within sandbox \"34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:52:15.637723 containerd[1453]: time="2025-01-30T13:52:15.637126763Z" level=info msg="CreateContainer within sandbox \"34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"762557a56ac86a20eac56485d94995515eedd7849f62f8a70ba8a22292e1e73c\"" Jan 30 13:52:15.637978 containerd[1453]: time="2025-01-30T13:52:15.637938708Z" level=info msg="StartContainer for \"762557a56ac86a20eac56485d94995515eedd7849f62f8a70ba8a22292e1e73c\"" Jan 30 13:52:15.707042 systemd[1]: Started cri-containerd-762557a56ac86a20eac56485d94995515eedd7849f62f8a70ba8a22292e1e73c.scope - libcontainer container 762557a56ac86a20eac56485d94995515eedd7849f62f8a70ba8a22292e1e73c. 
Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.661 [INFO][4407] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.664 [INFO][4407] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" iface="eth0" netns="/var/run/netns/cni-8692a4a1-a0b9-519d-8fa7-7c03529b8642" Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.665 [INFO][4407] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" iface="eth0" netns="/var/run/netns/cni-8692a4a1-a0b9-519d-8fa7-7c03529b8642" Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.665 [INFO][4407] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" iface="eth0" netns="/var/run/netns/cni-8692a4a1-a0b9-519d-8fa7-7c03529b8642" Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.665 [INFO][4407] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.665 [INFO][4407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.694 [INFO][4415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" HandleID="k8s-pod-network.16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.694 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.694 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.701 [WARNING][4415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" HandleID="k8s-pod-network.16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.701 [INFO][4415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" HandleID="k8s-pod-network.16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.702 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:15.708603 containerd[1453]: 2025-01-30 13:52:15.705 [INFO][4407] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:15.709617 containerd[1453]: time="2025-01-30T13:52:15.709573483Z" level=info msg="TearDown network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\" successfully" Jan 30 13:52:15.709681 containerd[1453]: time="2025-01-30T13:52:15.709616744Z" level=info msg="StopPodSandbox for \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\" returns successfully" Jan 30 13:52:15.709962 kubelet[2510]: E0130 13:52:15.709933 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:15.710927 containerd[1453]: time="2025-01-30T13:52:15.710900126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w6jzk,Uid:83cb02d6-877c-42c9-9de7-939337ef1dd0,Namespace:kube-system,Attempt:1,}" Jan 30 13:52:15.744131 containerd[1453]: time="2025-01-30T13:52:15.744076991Z" level=info msg="StartContainer for \"762557a56ac86a20eac56485d94995515eedd7849f62f8a70ba8a22292e1e73c\" returns successfully" Jan 30 13:52:15.772966 systemd[1]: run-netns-cni\x2d8692a4a1\x2da0b9\x2d519d\x2d8fa7\x2d7c03529b8642.mount: Deactivated successfully. Jan 30 13:52:15.780580 kubelet[2510]: E0130 13:52:15.780267 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:15.810729 kubelet[2510]: I0130 13:52:15.810655 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hc2vh" podStartSLOduration=32.810603108 podStartE2EDuration="32.810603108s" podCreationTimestamp="2025-01-30 13:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:15.79311028 +0000 UTC m=+40.274167495" watchObservedRunningTime="2025-01-30 13:52:15.810603108 +0000 UTC m=+40.291660323" Jan 30 13:52:15.950224 systemd-networkd[1401]: calid3b5dfbd275: Link UP Jan 30 13:52:15.950882 systemd-networkd[1401]: calid3b5dfbd275: Gained carrier Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.766 [INFO][4448] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0 coredns-6f6b679f8f- kube-system 83cb02d6-877c-42c9-9de7-939337ef1dd0 910 0 2025-01-30 13:51:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-w6jzk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid3b5dfbd275 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Namespace="kube-system" Pod="coredns-6f6b679f8f-w6jzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w6jzk-" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.766 [INFO][4448] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Namespace="kube-system" Pod="coredns-6f6b679f8f-w6jzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.805 
[INFO][4475] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" HandleID="k8s-pod-network.cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.920 [INFO][4475] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" HandleID="k8s-pod-network.cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd190), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-w6jzk", "timestamp":"2025-01-30 13:52:15.805039434 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.920 [INFO][4475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.920 [INFO][4475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.920 [INFO][4475] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.922 [INFO][4475] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" host="localhost" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.926 [INFO][4475] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.930 [INFO][4475] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.932 [INFO][4475] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.934 [INFO][4475] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.934 [INFO][4475] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" host="localhost" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.936 [INFO][4475] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89 Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.940 [INFO][4475] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" host="localhost" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.945 [INFO][4475] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" host="localhost" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.945 [INFO][4475] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" host="localhost" Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.945 [INFO][4475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:15.964294 containerd[1453]: 2025-01-30 13:52:15.945 [INFO][4475] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" HandleID="k8s-pod-network.cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.964888 containerd[1453]: 2025-01-30 13:52:15.948 [INFO][4448] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Namespace="kube-system" Pod="coredns-6f6b679f8f-w6jzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"83cb02d6-877c-42c9-9de7-939337ef1dd0", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-w6jzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3b5dfbd275", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:15.964888 containerd[1453]: 2025-01-30 13:52:15.948 [INFO][4448] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Namespace="kube-system" Pod="coredns-6f6b679f8f-w6jzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.964888 containerd[1453]: 2025-01-30 13:52:15.948 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3b5dfbd275 ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Namespace="kube-system" Pod="coredns-6f6b679f8f-w6jzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.964888 containerd[1453]: 2025-01-30 13:52:15.950 [INFO][4448] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Namespace="kube-system" Pod="coredns-6f6b679f8f-w6jzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.964888 containerd[1453]: 2025-01-30 13:52:15.950 [INFO][4448] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Namespace="kube-system" Pod="coredns-6f6b679f8f-w6jzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"83cb02d6-877c-42c9-9de7-939337ef1dd0", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89", Pod:"coredns-6f6b679f8f-w6jzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3b5dfbd275", MAC:"22:a9:4b:96:40:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:15.964888 containerd[1453]: 2025-01-30 13:52:15.959 [INFO][4448] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89" Namespace="kube-system" Pod="coredns-6f6b679f8f-w6jzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:15.985624 containerd[1453]: time="2025-01-30T13:52:15.984831415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:15.985624 containerd[1453]: time="2025-01-30T13:52:15.985563190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:15.985624 containerd[1453]: time="2025-01-30T13:52:15.985579671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:15.985824 containerd[1453]: time="2025-01-30T13:52:15.985707731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:16.017922 systemd[1]: Started cri-containerd-cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89.scope - libcontainer container cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89. Jan 30 13:52:16.030575 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:16.054863 containerd[1453]: time="2025-01-30T13:52:16.054811728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w6jzk,Uid:83cb02d6-877c-42c9-9de7-939337ef1dd0,Namespace:kube-system,Attempt:1,} returns sandbox id \"cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89\"" Jan 30 13:52:16.056132 kubelet[2510]: E0130 13:52:16.056084 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:16.058754 containerd[1453]: time="2025-01-30T13:52:16.058527289Z" level=info msg="CreateContainer within sandbox \"cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:52:16.079807 containerd[1453]: time="2025-01-30T13:52:16.079740391Z" level=info msg="CreateContainer within sandbox \"cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c075d73163e5967dbae6e984b453b3a05e9e71440eddf77e2434b8d7559df59e\"" Jan 30 13:52:16.080423 containerd[1453]: time="2025-01-30T13:52:16.080391105Z" level=info msg="StartContainer for \"c075d73163e5967dbae6e984b453b3a05e9e71440eddf77e2434b8d7559df59e\"" Jan 30 13:52:16.106909 systemd[1]: Started cri-containerd-c075d73163e5967dbae6e984b453b3a05e9e71440eddf77e2434b8d7559df59e.scope - libcontainer container c075d73163e5967dbae6e984b453b3a05e9e71440eddf77e2434b8d7559df59e. 
Jan 30 13:52:16.133311 containerd[1453]: time="2025-01-30T13:52:16.133253245Z" level=info msg="StartContainer for \"c075d73163e5967dbae6e984b453b3a05e9e71440eddf77e2434b8d7559df59e\" returns successfully" Jan 30 13:52:16.539935 systemd-networkd[1401]: cali33c093dbbde: Gained IPv6LL Jan 30 13:52:16.611587 containerd[1453]: time="2025-01-30T13:52:16.611538940Z" level=info msg="StopPodSandbox for \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\"" Jan 30 13:52:16.783401 kubelet[2510]: E0130 13:52:16.783358 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:16.783805 kubelet[2510]: E0130 13:52:16.783498 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:16.799394 kubelet[2510]: I0130 13:52:16.798748 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-w6jzk" podStartSLOduration=33.798730663 podStartE2EDuration="33.798730663s" podCreationTimestamp="2025-01-30 13:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:16.798207851 +0000 UTC m=+41.279265066" watchObservedRunningTime="2025-01-30 13:52:16.798730663 +0000 UTC m=+41.279787878" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.790 [INFO][4602] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.790 [INFO][4602] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" iface="eth0" netns="/var/run/netns/cni-7991c493-70d8-4763-3698-0c3879af4bc2" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.790 [INFO][4602] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" iface="eth0" netns="/var/run/netns/cni-7991c493-70d8-4763-3698-0c3879af4bc2" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.791 [INFO][4602] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" iface="eth0" netns="/var/run/netns/cni-7991c493-70d8-4763-3698-0c3879af4bc2" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.791 [INFO][4602] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.791 [INFO][4602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.819 [INFO][4610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" HandleID="k8s-pod-network.9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.820 [INFO][4610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.820 [INFO][4610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.826 [WARNING][4610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" HandleID="k8s-pod-network.9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.826 [INFO][4610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" HandleID="k8s-pod-network.9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.827 [INFO][4610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:16.833586 containerd[1453]: 2025-01-30 13:52:16.830 [INFO][4602] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:16.834323 containerd[1453]: time="2025-01-30T13:52:16.833917037Z" level=info msg="TearDown network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\" successfully" Jan 30 13:52:16.834323 containerd[1453]: time="2025-01-30T13:52:16.833948576Z" level=info msg="StopPodSandbox for \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\" returns successfully" Jan 30 13:52:16.834916 containerd[1453]: time="2025-01-30T13:52:16.834661876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f465bb95b-wgbwq,Uid:2e76c3ad-2358-49df-8767-24e970a1ef0c,Namespace:calico-system,Attempt:1,}" Jan 30 13:52:16.836826 systemd[1]: run-netns-cni\x2d7991c493\x2d70d8\x2d4763\x2d3698\x2d0c3879af4bc2.mount: Deactivated successfully. Jan 30 13:52:17.110505 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:43116.service - OpenSSH per-connection server daemon (10.0.0.1:43116). 
Jan 30 13:52:17.128111 systemd-networkd[1401]: cali0767456de23: Link UP Jan 30 13:52:17.128334 systemd-networkd[1401]: cali0767456de23: Gained carrier Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.885 [INFO][4622] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0 calico-kube-controllers-7f465bb95b- calico-system 2e76c3ad-2358-49df-8767-24e970a1ef0c 941 0 2025-01-30 13:51:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f465bb95b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f465bb95b-wgbwq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0767456de23 [] []}} ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Namespace="calico-system" Pod="calico-kube-controllers-7f465bb95b-wgbwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.885 [INFO][4622] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Namespace="calico-system" Pod="calico-kube-controllers-7f465bb95b-wgbwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.921 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" HandleID="k8s-pod-network.52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.930 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" HandleID="k8s-pod-network.52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f465bb95b-wgbwq", "timestamp":"2025-01-30 13:52:16.921134791 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.931 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.931 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.931 [INFO][4635] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.933 [INFO][4635] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" host="localhost" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.936 [INFO][4635] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.940 [INFO][4635] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.942 [INFO][4635] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.945 [INFO][4635] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.945 [INFO][4635] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" host="localhost" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:16.947 [INFO][4635] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41 Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:17.004 [INFO][4635] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" host="localhost" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:17.122 [INFO][4635] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" host="localhost" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:17.122 [INFO][4635] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" host="localhost" Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:17.122 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:17.223704 containerd[1453]: 2025-01-30 13:52:17.122 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" HandleID="k8s-pod-network.52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:17.224326 containerd[1453]: 2025-01-30 13:52:17.125 [INFO][4622] cni-plugin/k8s.go 386: Populated endpoint ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Namespace="calico-system" Pod="calico-kube-controllers-7f465bb95b-wgbwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0", GenerateName:"calico-kube-controllers-7f465bb95b-", Namespace:"calico-system", SelfLink:"", UID:"2e76c3ad-2358-49df-8767-24e970a1ef0c", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f465bb95b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f465bb95b-wgbwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0767456de23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:17.224326 containerd[1453]: 2025-01-30 13:52:17.125 [INFO][4622] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Namespace="calico-system" Pod="calico-kube-controllers-7f465bb95b-wgbwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:17.224326 containerd[1453]: 2025-01-30 13:52:17.125 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0767456de23 ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Namespace="calico-system" Pod="calico-kube-controllers-7f465bb95b-wgbwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:17.224326 containerd[1453]: 2025-01-30 13:52:17.128 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Namespace="calico-system" Pod="calico-kube-controllers-7f465bb95b-wgbwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:17.224326 containerd[1453]: 2025-01-30 13:52:17.129 [INFO][4622] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Namespace="calico-system" Pod="calico-kube-controllers-7f465bb95b-wgbwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0", GenerateName:"calico-kube-controllers-7f465bb95b-", Namespace:"calico-system", SelfLink:"", UID:"2e76c3ad-2358-49df-8767-24e970a1ef0c", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f465bb95b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41", Pod:"calico-kube-controllers-7f465bb95b-wgbwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0767456de23", MAC:"ce:e1:29:bf:b0:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:17.224326 containerd[1453]: 2025-01-30 13:52:17.220 [INFO][4622] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41" Namespace="calico-system" Pod="calico-kube-controllers-7f465bb95b-wgbwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:17.333928 sshd[4644]: Accepted publickey for core from 10.0.0.1 port 43116 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:17.335897 sshd[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:17.339941 systemd-logind[1441]: New session 12 of user core. Jan 30 13:52:17.348986 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:52:17.360500 containerd[1453]: time="2025-01-30T13:52:17.359333301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:17.360500 containerd[1453]: time="2025-01-30T13:52:17.359372735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:17.360500 containerd[1453]: time="2025-01-30T13:52:17.359382142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:17.360500 containerd[1453]: time="2025-01-30T13:52:17.359448777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:17.388932 systemd[1]: Started cri-containerd-52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41.scope - libcontainer container 52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41. Jan 30 13:52:17.401514 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:17.433879 containerd[1453]: time="2025-01-30T13:52:17.433829249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f465bb95b-wgbwq,Uid:2e76c3ad-2358-49df-8767-24e970a1ef0c,Namespace:calico-system,Attempt:1,} returns sandbox id \"52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41\"" Jan 30 13:52:17.495942 sshd[4644]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:17.501063 systemd-networkd[1401]: calid3b5dfbd275: Gained IPv6LL Jan 30 13:52:17.505163 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:43116.service: Deactivated successfully. Jan 30 13:52:17.507276 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:52:17.507990 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:52:17.512051 systemd-logind[1441]: Removed session 12. Jan 30 13:52:17.611413 containerd[1453]: time="2025-01-30T13:52:17.611358803Z" level=info msg="StopPodSandbox for \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\"" Jan 30 13:52:17.790107 kubelet[2510]: E0130 13:52:17.789726 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:17.790107 kubelet[2510]: E0130 13:52:17.789938 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.785 [INFO][4737] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.786 [INFO][4737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" iface="eth0" netns="/var/run/netns/cni-9844c1bd-fc86-74bc-434e-f332c43db980" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.786 [INFO][4737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" iface="eth0" netns="/var/run/netns/cni-9844c1bd-fc86-74bc-434e-f332c43db980" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.788 [INFO][4737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" iface="eth0" netns="/var/run/netns/cni-9844c1bd-fc86-74bc-434e-f332c43db980" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.788 [INFO][4737] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.788 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.811 [INFO][4744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" HandleID="k8s-pod-network.45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.812 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.812 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.818 [WARNING][4744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" HandleID="k8s-pod-network.45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.818 [INFO][4744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" HandleID="k8s-pod-network.45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.819 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:17.825731 containerd[1453]: 2025-01-30 13:52:17.822 [INFO][4737] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:17.826260 containerd[1453]: time="2025-01-30T13:52:17.825947082Z" level=info msg="TearDown network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\" successfully" Jan 30 13:52:17.826260 containerd[1453]: time="2025-01-30T13:52:17.825974122Z" level=info msg="StopPodSandbox for \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\" returns successfully" Jan 30 13:52:17.826700 containerd[1453]: time="2025-01-30T13:52:17.826666773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f996cd869-ggsld,Uid:4ef025b4-c416-42de-a536-3742569ad063,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:52:17.829103 systemd[1]: run-netns-cni\x2d9844c1bd\x2dfc86\x2d74bc\x2d434e\x2df332c43db980.mount: Deactivated successfully. 
Jan 30 13:52:18.600682 containerd[1453]: time="2025-01-30T13:52:18.600635453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:18.601662 containerd[1453]: time="2025-01-30T13:52:18.601477315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:52:18.603465 containerd[1453]: time="2025-01-30T13:52:18.603407951Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:18.606356 containerd[1453]: time="2025-01-30T13:52:18.606308159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:18.607509 containerd[1453]: time="2025-01-30T13:52:18.607478106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.991871798s" Jan 30 13:52:18.609936 containerd[1453]: time="2025-01-30T13:52:18.609904754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:52:18.613160 containerd[1453]: time="2025-01-30T13:52:18.613118521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:52:18.613920 containerd[1453]: time="2025-01-30T13:52:18.613888817Z" level=info msg="CreateContainer within sandbox \"04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:52:18.627091 containerd[1453]: time="2025-01-30T13:52:18.627038526Z" level=info msg="CreateContainer within sandbox \"04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1193b5b1258930b9203e39f1cffa668a5e01d22a3dfbcf2ee8ec2dffc440d5bc\"" Jan 30 13:52:18.627644 containerd[1453]: time="2025-01-30T13:52:18.627612263Z" level=info msg="StartContainer for \"1193b5b1258930b9203e39f1cffa668a5e01d22a3dfbcf2ee8ec2dffc440d5bc\"" Jan 30 13:52:18.672101 systemd[1]: Started cri-containerd-1193b5b1258930b9203e39f1cffa668a5e01d22a3dfbcf2ee8ec2dffc440d5bc.scope - libcontainer container 1193b5b1258930b9203e39f1cffa668a5e01d22a3dfbcf2ee8ec2dffc440d5bc. 
Jan 30 13:52:18.702145 systemd-networkd[1401]: caliba5885cf6ef: Link UP Jan 30 13:52:18.703341 systemd-networkd[1401]: caliba5885cf6ef: Gained carrier Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.622 [INFO][4760] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0 calico-apiserver-f996cd869- calico-apiserver 4ef025b4-c416-42de-a536-3742569ad063 964 0 2025-01-30 13:51:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f996cd869 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f996cd869-ggsld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba5885cf6ef [] []}} ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-ggsld" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--ggsld-" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.623 [INFO][4760] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-ggsld" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.656 [INFO][4775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" HandleID="k8s-pod-network.86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.666 [INFO][4775] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" HandleID="k8s-pod-network.86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000317ee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f996cd869-ggsld", "timestamp":"2025-01-30 13:52:18.656416275 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.666 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.666 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.666 [INFO][4775] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.669 [INFO][4775] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" host="localhost" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.673 [INFO][4775] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.678 [INFO][4775] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.680 [INFO][4775] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.682 [INFO][4775] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.682 [INFO][4775] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" host="localhost" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.684 [INFO][4775] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.688 [INFO][4775] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" host="localhost" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.694 [INFO][4775] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" host="localhost" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.694 [INFO][4775] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" host="localhost" Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.694 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:18.719876 containerd[1453]: 2025-01-30 13:52:18.694 [INFO][4775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" HandleID="k8s-pod-network.86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:18.721576 containerd[1453]: 2025-01-30 13:52:18.697 [INFO][4760] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-ggsld" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0", GenerateName:"calico-apiserver-f996cd869-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef025b4-c416-42de-a536-3742569ad063", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f996cd869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f996cd869-ggsld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba5885cf6ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:18.721576 containerd[1453]: 2025-01-30 13:52:18.697 [INFO][4760] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-ggsld" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:18.721576 containerd[1453]: 2025-01-30 13:52:18.697 [INFO][4760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba5885cf6ef ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-ggsld" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:18.721576 containerd[1453]: 2025-01-30 13:52:18.700 [INFO][4760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-ggsld" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:18.721576 containerd[1453]: 2025-01-30 13:52:18.703 [INFO][4760] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" 
Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-ggsld" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0", GenerateName:"calico-apiserver-f996cd869-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef025b4-c416-42de-a536-3742569ad063", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f996cd869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a", Pod:"calico-apiserver-f996cd869-ggsld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba5885cf6ef", MAC:"12:b8:98:71:3c:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:18.721576 containerd[1453]: 2025-01-30 13:52:18.714 [INFO][4760] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a" Namespace="calico-apiserver" Pod="calico-apiserver-f996cd869-ggsld" WorkloadEndpoint="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:18.889033 containerd[1453]: time="2025-01-30T13:52:18.888909011Z" level=info msg="StartContainer for \"1193b5b1258930b9203e39f1cffa668a5e01d22a3dfbcf2ee8ec2dffc440d5bc\" returns successfully" Jan 30 13:52:18.893403 kubelet[2510]: E0130 13:52:18.893368 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:18.906074 kubelet[2510]: I0130 13:52:18.905098 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f996cd869-vf8nc" podStartSLOduration=26.384800089 podStartE2EDuration="30.905080144s" podCreationTimestamp="2025-01-30 13:51:48 +0000 UTC" firstStartedPulling="2025-01-30 13:52:14.092087444 +0000 UTC m=+38.573144659" lastFinishedPulling="2025-01-30 13:52:18.612367499 +0000 UTC m=+43.093424714" observedRunningTime="2025-01-30 13:52:18.902807196 +0000 UTC m=+43.383864411" watchObservedRunningTime="2025-01-30 13:52:18.905080144 +0000 UTC m=+43.386137359" Jan 30 13:52:18.911099 containerd[1453]: time="2025-01-30T13:52:18.910713396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:18.911099 containerd[1453]: time="2025-01-30T13:52:18.910781674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:18.911099 containerd[1453]: time="2025-01-30T13:52:18.910793115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:18.911099 containerd[1453]: time="2025-01-30T13:52:18.910880229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:18.934906 systemd[1]: Started cri-containerd-86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a.scope - libcontainer container 86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a. Jan 30 13:52:18.949598 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:18.979201 containerd[1453]: time="2025-01-30T13:52:18.979150299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f996cd869-ggsld,Uid:4ef025b4-c416-42de-a536-3742569ad063,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a\"" Jan 30 13:52:18.982093 containerd[1453]: time="2025-01-30T13:52:18.982050497Z" level=info msg="CreateContainer within sandbox \"86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:52:18.995999 containerd[1453]: time="2025-01-30T13:52:18.995871887Z" level=info msg="CreateContainer within sandbox \"86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1192c1ccd293f13be6aa45da4b23cbf35af3eeb7c5f92184a677c409d967e9c9\"" Jan 30 13:52:18.996946 containerd[1453]: time="2025-01-30T13:52:18.996728767Z" level=info msg="StartContainer for \"1192c1ccd293f13be6aa45da4b23cbf35af3eeb7c5f92184a677c409d967e9c9\"" Jan 30 13:52:19.026905 systemd[1]: Started cri-containerd-1192c1ccd293f13be6aa45da4b23cbf35af3eeb7c5f92184a677c409d967e9c9.scope - libcontainer container 1192c1ccd293f13be6aa45da4b23cbf35af3eeb7c5f92184a677c409d967e9c9. 
Jan 30 13:52:19.070210 containerd[1453]: time="2025-01-30T13:52:19.069746601Z" level=info msg="StartContainer for \"1192c1ccd293f13be6aa45da4b23cbf35af3eeb7c5f92184a677c409d967e9c9\" returns successfully" Jan 30 13:52:19.164165 systemd-networkd[1401]: cali0767456de23: Gained IPv6LL Jan 30 13:52:19.989985 containerd[1453]: time="2025-01-30T13:52:19.989934009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:19.990926 containerd[1453]: time="2025-01-30T13:52:19.990848256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:52:19.992188 containerd[1453]: time="2025-01-30T13:52:19.992145462Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:19.994565 containerd[1453]: time="2025-01-30T13:52:19.994494024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:19.995290 containerd[1453]: time="2025-01-30T13:52:19.995248821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.382090196s" Jan 30 13:52:19.995331 containerd[1453]: time="2025-01-30T13:52:19.995292242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:52:19.996389 containerd[1453]: time="2025-01-30T13:52:19.996349648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:52:19.997378 containerd[1453]: time="2025-01-30T13:52:19.997341832Z" level=info msg="CreateContainer within sandbox \"34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:52:20.013422 containerd[1453]: time="2025-01-30T13:52:20.013383227Z" level=info msg="CreateContainer within sandbox \"34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"85ae8e0fdc0ebe3ee602b82fb83774f602b56c14d27b8e29a6ffb1ae9f900ba5\"" Jan 30 13:52:20.014306 containerd[1453]: time="2025-01-30T13:52:20.013864060Z" level=info msg="StartContainer for \"85ae8e0fdc0ebe3ee602b82fb83774f602b56c14d27b8e29a6ffb1ae9f900ba5\"" Jan 30 13:52:20.049563 systemd[1]: Started cri-containerd-85ae8e0fdc0ebe3ee602b82fb83774f602b56c14d27b8e29a6ffb1ae9f900ba5.scope - libcontainer container 85ae8e0fdc0ebe3ee602b82fb83774f602b56c14d27b8e29a6ffb1ae9f900ba5. 
Jan 30 13:52:20.160344 containerd[1453]: time="2025-01-30T13:52:20.160301468Z" level=info msg="StartContainer for \"85ae8e0fdc0ebe3ee602b82fb83774f602b56c14d27b8e29a6ffb1ae9f900ba5\" returns successfully" Jan 30 13:52:20.251995 systemd-networkd[1401]: caliba5885cf6ef: Gained IPv6LL Jan 30 13:52:20.666631 kubelet[2510]: I0130 13:52:20.666467 2510 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:52:20.666631 kubelet[2510]: I0130 13:52:20.666519 2510 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:52:20.768926 systemd[1]: run-containerd-runc-k8s.io-85ae8e0fdc0ebe3ee602b82fb83774f602b56c14d27b8e29a6ffb1ae9f900ba5-runc.WWq61W.mount: Deactivated successfully. Jan 30 13:52:20.900570 kubelet[2510]: I0130 13:52:20.900539 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:20.914965 kubelet[2510]: I0130 13:52:20.914757 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lzkrl" podStartSLOduration=26.915643554 podStartE2EDuration="32.914739054s" podCreationTimestamp="2025-01-30 13:51:48 +0000 UTC" firstStartedPulling="2025-01-30 13:52:13.997031871 +0000 UTC m=+38.478089086" lastFinishedPulling="2025-01-30 13:52:19.996127381 +0000 UTC m=+44.477184586" observedRunningTime="2025-01-30 13:52:20.914323814 +0000 UTC m=+45.395381029" watchObservedRunningTime="2025-01-30 13:52:20.914739054 +0000 UTC m=+45.395796269" Jan 30 13:52:20.915147 kubelet[2510]: I0130 13:52:20.915048 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f996cd869-ggsld" podStartSLOduration=32.915042083 podStartE2EDuration="32.915042083s" podCreationTimestamp="2025-01-30 13:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:19.908708741 +0000 UTC m=+44.389765966" watchObservedRunningTime="2025-01-30 13:52:20.915042083 +0000 UTC m=+45.396099298" Jan 30 13:52:22.383749 containerd[1453]: time="2025-01-30T13:52:22.383682807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:22.384647 containerd[1453]: time="2025-01-30T13:52:22.384609367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:52:22.386132 containerd[1453]: time="2025-01-30T13:52:22.386068366Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:22.388550 containerd[1453]: time="2025-01-30T13:52:22.388507226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:22.389086 containerd[1453]: time="2025-01-30T13:52:22.389040988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.392661433s" Jan 30 13:52:22.389086 containerd[1453]: time="2025-01-30T13:52:22.389073208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:52:22.397366 containerd[1453]: time="2025-01-30T13:52:22.397313761Z" level=info msg="CreateContainer within sandbox \"52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:52:22.416629 containerd[1453]: time="2025-01-30T13:52:22.416573463Z" level=info msg="CreateContainer within sandbox \"52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"75c185537abd5115cc5b60f2498d736f9703ac591172a79b3a5122a8839033e8\"" Jan 30 13:52:22.417158 containerd[1453]: time="2025-01-30T13:52:22.417036192Z" level=info msg="StartContainer for \"75c185537abd5115cc5b60f2498d736f9703ac591172a79b3a5122a8839033e8\"" Jan 30 13:52:22.445903 systemd[1]: Started cri-containerd-75c185537abd5115cc5b60f2498d736f9703ac591172a79b3a5122a8839033e8.scope - libcontainer container 75c185537abd5115cc5b60f2498d736f9703ac591172a79b3a5122a8839033e8. Jan 30 13:52:22.487129 containerd[1453]: time="2025-01-30T13:52:22.487078764Z" level=info msg="StartContainer for \"75c185537abd5115cc5b60f2498d736f9703ac591172a79b3a5122a8839033e8\" returns successfully" Jan 30 13:52:22.509463 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:38840.service - OpenSSH per-connection server daemon (10.0.0.1:38840). Jan 30 13:52:22.558131 sshd[5013]: Accepted publickey for core from 10.0.0.1 port 38840 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:22.560566 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:22.570192 systemd-logind[1441]: New session 13 of user core. Jan 30 13:52:22.575010 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:52:22.708239 sshd[5013]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:22.720499 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:38840.service: Deactivated successfully. Jan 30 13:52:22.723007 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:52:22.723958 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:52:22.734105 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:38842.service - OpenSSH per-connection server daemon (10.0.0.1:38842). Jan 30 13:52:22.734991 systemd-logind[1441]: Removed session 13. Jan 30 13:52:22.766620 sshd[5028]: Accepted publickey for core from 10.0.0.1 port 38842 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:22.768376 sshd[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:22.772976 systemd-logind[1441]: New session 14 of user core. Jan 30 13:52:22.782917 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:52:22.972681 sshd[5028]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:22.987520 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:38842.service: Deactivated successfully. Jan 30 13:52:22.995330 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:52:22.997063 systemd-logind[1441]: Session 14 logged out. 
Waiting for processes to exit. Jan 30 13:52:23.014464 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:38848.service - OpenSSH per-connection server daemon (10.0.0.1:38848). Jan 30 13:52:23.015999 systemd-logind[1441]: Removed session 14. Jan 30 13:52:23.047152 kubelet[2510]: I0130 13:52:23.047057 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f465bb95b-wgbwq" podStartSLOduration=30.093545517 podStartE2EDuration="35.047001686s" podCreationTimestamp="2025-01-30 13:51:48 +0000 UTC" firstStartedPulling="2025-01-30 13:52:17.436299009 +0000 UTC m=+41.917356224" lastFinishedPulling="2025-01-30 13:52:22.389755178 +0000 UTC m=+46.870812393" observedRunningTime="2025-01-30 13:52:22.94036521 +0000 UTC m=+47.421422425" watchObservedRunningTime="2025-01-30 13:52:23.047001686 +0000 UTC m=+47.528058901" Jan 30 13:52:23.050072 sshd[5058]: Accepted publickey for core from 10.0.0.1 port 38848 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:23.052380 sshd[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:23.061752 systemd-logind[1441]: New session 15 of user core. Jan 30 13:52:23.066952 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:52:23.196396 sshd[5058]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:23.201364 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:38848.service: Deactivated successfully. Jan 30 13:52:23.203462 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:52:23.204414 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:52:23.205405 systemd-logind[1441]: Removed session 15. Jan 30 13:52:28.214580 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:54622.service - OpenSSH per-connection server daemon (10.0.0.1:54622). Jan 30 13:52:28.804498 sshd[5084]: Accepted publickey for core from 10.0.0.1 port 54622 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:28.806179 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:28.810668 systemd-logind[1441]: New session 16 of user core. Jan 30 13:52:28.825978 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:52:28.959338 sshd[5084]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:28.964458 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:54622.service: Deactivated successfully. Jan 30 13:52:28.967019 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:52:28.967696 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:52:28.968592 systemd-logind[1441]: Removed session 16. Jan 30 13:52:30.660726 kubelet[2510]: E0130 13:52:30.660632 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:33.970463 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:54638.service - OpenSSH per-connection server daemon (10.0.0.1:54638). Jan 30 13:52:34.004253 sshd[5127]: Accepted publickey for core from 10.0.0.1 port 54638 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:34.005711 sshd[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:34.009252 systemd-logind[1441]: New session 17 of user core. Jan 30 13:52:34.015872 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 30 13:52:34.121557 sshd[5127]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:34.125299 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:54638.service: Deactivated successfully. Jan 30 13:52:34.127332 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:52:34.127931 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:52:34.128864 systemd-logind[1441]: Removed session 17. Jan 30 13:52:35.599566 containerd[1453]: time="2025-01-30T13:52:35.599528605Z" level=info msg="StopPodSandbox for \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\"" Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.634 [WARNING][5175] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0", GenerateName:"calico-apiserver-f996cd869-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef025b4-c416-42de-a536-3742569ad063", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f996cd869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a", Pod:"calico-apiserver-f996cd869-ggsld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba5885cf6ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.634 [INFO][5175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.634 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" iface="eth0" netns="" Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.634 [INFO][5175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.634 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.654 [INFO][5186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" HandleID="k8s-pod-network.45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.655 [INFO][5186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.655 [INFO][5186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.659 [WARNING][5186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" HandleID="k8s-pod-network.45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.659 [INFO][5186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" HandleID="k8s-pod-network.45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.660 [INFO][5186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:35.666257 containerd[1453]: 2025-01-30 13:52:35.663 [INFO][5175] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:35.666847 containerd[1453]: time="2025-01-30T13:52:35.666296072Z" level=info msg="TearDown network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\" successfully" Jan 30 13:52:35.666847 containerd[1453]: time="2025-01-30T13:52:35.666320158Z" level=info msg="StopPodSandbox for \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\" returns successfully" Jan 30 13:52:35.673213 containerd[1453]: time="2025-01-30T13:52:35.673168908Z" level=info msg="RemovePodSandbox for \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\"" Jan 30 13:52:35.675909 containerd[1453]: time="2025-01-30T13:52:35.675830482Z" level=info msg="Forcibly stopping sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\"" Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.708 [WARNING][5209] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0", GenerateName:"calico-apiserver-f996cd869-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef025b4-c416-42de-a536-3742569ad063", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f996cd869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86130fe3cb15fa6736e97dcd27d451ce065a7f3b58b3c41c941141f1a476dc7a", Pod:"calico-apiserver-f996cd869-ggsld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba5885cf6ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.708 [INFO][5209] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.708 [INFO][5209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" iface="eth0" netns="" Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.708 [INFO][5209] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.708 [INFO][5209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.728 [INFO][5216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" HandleID="k8s-pod-network.45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.729 [INFO][5216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.729 [INFO][5216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.733 [WARNING][5216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" HandleID="k8s-pod-network.45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.733 [INFO][5216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" HandleID="k8s-pod-network.45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Workload="localhost-k8s-calico--apiserver--f996cd869--ggsld-eth0" Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.734 [INFO][5216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:35.739526 containerd[1453]: 2025-01-30 13:52:35.737 [INFO][5209] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582" Jan 30 13:52:35.740056 containerd[1453]: time="2025-01-30T13:52:35.739568345Z" level=info msg="TearDown network for sandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\" successfully" Jan 30 13:52:35.814302 containerd[1453]: time="2025-01-30T13:52:35.814253309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:35.814405 containerd[1453]: time="2025-01-30T13:52:35.814323190Z" level=info msg="RemovePodSandbox \"45450275944a079f799a9691c6d08435705100c439dd900b8582e0cb13bbb582\" returns successfully" Jan 30 13:52:35.814724 containerd[1453]: time="2025-01-30T13:52:35.814704316Z" level=info msg="StopPodSandbox for \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\"" Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.850 [WARNING][5240] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"83cb02d6-877c-42c9-9de7-939337ef1dd0", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89", Pod:"coredns-6f6b679f8f-w6jzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3b5dfbd275", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.850 [INFO][5240] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.850 [INFO][5240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" iface="eth0" netns="" Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.850 [INFO][5240] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.850 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.872 [INFO][5247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" HandleID="k8s-pod-network.16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.872 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.872 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.877 [WARNING][5247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" HandleID="k8s-pod-network.16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.877 [INFO][5247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" HandleID="k8s-pod-network.16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.878 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:35.884122 containerd[1453]: 2025-01-30 13:52:35.880 [INFO][5240] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:35.884122 containerd[1453]: time="2025-01-30T13:52:35.884093181Z" level=info msg="TearDown network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\" successfully" Jan 30 13:52:35.884122 containerd[1453]: time="2025-01-30T13:52:35.884117828Z" level=info msg="StopPodSandbox for \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\" returns successfully" Jan 30 13:52:35.884786 containerd[1453]: time="2025-01-30T13:52:35.884734395Z" level=info msg="RemovePodSandbox for \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\"" Jan 30 13:52:35.884817 containerd[1453]: time="2025-01-30T13:52:35.884799807Z" level=info msg="Forcibly stopping sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\"" Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.918 [WARNING][5269] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"83cb02d6-877c-42c9-9de7-939337ef1dd0", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cce5781a1bafc04fa973d2dff972841c36e42179086b71fd032669cd46bfcf89", Pod:"coredns-6f6b679f8f-w6jzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3b5dfbd275", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.918 [INFO][5269] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.918 [INFO][5269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" iface="eth0" netns="" Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.918 [INFO][5269] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.918 [INFO][5269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.940 [INFO][5276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" HandleID="k8s-pod-network.16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.940 [INFO][5276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.940 [INFO][5276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.945 [WARNING][5276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" HandleID="k8s-pod-network.16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.945 [INFO][5276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" HandleID="k8s-pod-network.16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Workload="localhost-k8s-coredns--6f6b679f8f--w6jzk-eth0" Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.947 [INFO][5276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:35.952450 containerd[1453]: 2025-01-30 13:52:35.949 [INFO][5269] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa" Jan 30 13:52:35.952893 containerd[1453]: time="2025-01-30T13:52:35.952467134Z" level=info msg="TearDown network for sandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\" successfully" Jan 30 13:52:35.956295 containerd[1453]: time="2025-01-30T13:52:35.956268797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:35.956367 containerd[1453]: time="2025-01-30T13:52:35.956307800Z" level=info msg="RemovePodSandbox \"16469312a10800e0abd857cb9b98d9cb5b32a56073e363090748377266468dfa\" returns successfully" Jan 30 13:52:35.956827 containerd[1453]: time="2025-01-30T13:52:35.956754288Z" level=info msg="StopPodSandbox for \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\"" Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:35.992 [WARNING][5298] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0", GenerateName:"calico-apiserver-f996cd869-", Namespace:"calico-apiserver", SelfLink:"", UID:"e495de51-4d7e-4867-b481-e0efba9ff50a", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f996cd869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e", Pod:"calico-apiserver-f996cd869-vf8nc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali617cc4b5339", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:35.992 [INFO][5298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:35.992 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" iface="eth0" netns="" Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:35.992 [INFO][5298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:35.992 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:36.018 [INFO][5305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" HandleID="k8s-pod-network.7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:36.018 [INFO][5305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:36.018 [INFO][5305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:36.023 [WARNING][5305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" HandleID="k8s-pod-network.7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:36.023 [INFO][5305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" HandleID="k8s-pod-network.7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:36.025 [INFO][5305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:36.030376 containerd[1453]: 2025-01-30 13:52:36.027 [INFO][5298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:36.030797 containerd[1453]: time="2025-01-30T13:52:36.030428162Z" level=info msg="TearDown network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\" successfully" Jan 30 13:52:36.030797 containerd[1453]: time="2025-01-30T13:52:36.030454071Z" level=info msg="StopPodSandbox for \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\" returns successfully" Jan 30 13:52:36.031034 containerd[1453]: time="2025-01-30T13:52:36.030996649Z" level=info msg="RemovePodSandbox for \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\"" Jan 30 13:52:36.031073 containerd[1453]: time="2025-01-30T13:52:36.031034641Z" level=info msg="Forcibly stopping sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\"" Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.067 [WARNING][5327] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0", GenerateName:"calico-apiserver-f996cd869-", Namespace:"calico-apiserver", SelfLink:"", UID:"e495de51-4d7e-4867-b481-e0efba9ff50a", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f996cd869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04560319bb1402f35c4bde8096c423cb9d09384d8aa79881008f9c523c54267e", Pod:"calico-apiserver-f996cd869-vf8nc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali617cc4b5339", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.067 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.067 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" iface="eth0" netns="" Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.067 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.067 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.086 [INFO][5335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" HandleID="k8s-pod-network.7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.086 [INFO][5335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.086 [INFO][5335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.091 [WARNING][5335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" HandleID="k8s-pod-network.7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.091 [INFO][5335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" HandleID="k8s-pod-network.7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Workload="localhost-k8s-calico--apiserver--f996cd869--vf8nc-eth0" Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.092 [INFO][5335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:36.097729 containerd[1453]: 2025-01-30 13:52:36.095 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2" Jan 30 13:52:36.098146 containerd[1453]: time="2025-01-30T13:52:36.097777158Z" level=info msg="TearDown network for sandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\" successfully" Jan 30 13:52:36.101589 containerd[1453]: time="2025-01-30T13:52:36.101561519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:36.101650 containerd[1453]: time="2025-01-30T13:52:36.101610751Z" level=info msg="RemovePodSandbox \"7daa727e4b500fcabb87bb1bef01b1e952a0dd243c5d5d0286272c3191afa1a2\" returns successfully" Jan 30 13:52:36.102098 containerd[1453]: time="2025-01-30T13:52:36.102063720Z" level=info msg="StopPodSandbox for \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\"" Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.135 [WARNING][5358] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ce9daa1b-6e98-4839-9f3b-00c7bb80d288", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd", Pod:"coredns-6f6b679f8f-hc2vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33c093dbbde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.135 [INFO][5358] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.135 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" iface="eth0" netns="" Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.135 [INFO][5358] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.135 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.156 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" HandleID="k8s-pod-network.b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.157 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.157 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.162 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" HandleID="k8s-pod-network.b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.162 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" HandleID="k8s-pod-network.b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.163 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:36.169311 containerd[1453]: 2025-01-30 13:52:36.166 [INFO][5358] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:36.169311 containerd[1453]: time="2025-01-30T13:52:36.169284224Z" level=info msg="TearDown network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\" successfully" Jan 30 13:52:36.169311 containerd[1453]: time="2025-01-30T13:52:36.169307268Z" level=info msg="StopPodSandbox for \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\" returns successfully" Jan 30 13:52:36.169785 containerd[1453]: time="2025-01-30T13:52:36.169751070Z" level=info msg="RemovePodSandbox for \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\"" Jan 30 13:52:36.169867 containerd[1453]: time="2025-01-30T13:52:36.169838144Z" level=info msg="Forcibly stopping sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\"" Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.203 [WARNING][5387] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ce9daa1b-6e98-4839-9f3b-00c7bb80d288", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb3a63d316abd46dd83eb7e0e4cbda0c1528747e5390c496cd530fc3fcaf3afd", Pod:"coredns-6f6b679f8f-hc2vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33c093dbbde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.203 [INFO][5387] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.203 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" iface="eth0" netns="" Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.203 [INFO][5387] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.203 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.227 [INFO][5394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" HandleID="k8s-pod-network.b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.228 [INFO][5394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.228 [INFO][5394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.233 [WARNING][5394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" HandleID="k8s-pod-network.b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.233 [INFO][5394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" HandleID="k8s-pod-network.b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Workload="localhost-k8s-coredns--6f6b679f8f--hc2vh-eth0" Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.234 [INFO][5394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:36.240562 containerd[1453]: 2025-01-30 13:52:36.237 [INFO][5387] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c" Jan 30 13:52:36.240971 containerd[1453]: time="2025-01-30T13:52:36.240628726Z" level=info msg="TearDown network for sandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\" successfully" Jan 30 13:52:36.245940 containerd[1453]: time="2025-01-30T13:52:36.245900117Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:36.246034 containerd[1453]: time="2025-01-30T13:52:36.245958647Z" level=info msg="RemovePodSandbox \"b9f5b22a0b33c71e21a9f1aba9134edad69b3d5a2129b2e1e75990ec22a40c2c\" returns successfully" Jan 30 13:52:36.246479 containerd[1453]: time="2025-01-30T13:52:36.246454136Z" level=info msg="StopPodSandbox for \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\"" Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.292 [WARNING][5416] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0", GenerateName:"calico-kube-controllers-7f465bb95b-", Namespace:"calico-system", SelfLink:"", UID:"2e76c3ad-2358-49df-8767-24e970a1ef0c", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f465bb95b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41", Pod:"calico-kube-controllers-7f465bb95b-wgbwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0767456de23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.292 [INFO][5416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.292 [INFO][5416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" iface="eth0" netns="" Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.292 [INFO][5416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.292 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.316 [INFO][5423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" HandleID="k8s-pod-network.9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.316 [INFO][5423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.316 [INFO][5423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.320 [WARNING][5423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" HandleID="k8s-pod-network.9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.320 [INFO][5423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" HandleID="k8s-pod-network.9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.321 [INFO][5423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:36.327376 containerd[1453]: 2025-01-30 13:52:36.324 [INFO][5416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:36.327753 containerd[1453]: time="2025-01-30T13:52:36.327420671Z" level=info msg="TearDown network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\" successfully" Jan 30 13:52:36.327753 containerd[1453]: time="2025-01-30T13:52:36.327445057Z" level=info msg="StopPodSandbox for \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\" returns successfully" Jan 30 13:52:36.327968 containerd[1453]: time="2025-01-30T13:52:36.327935277Z" level=info msg="RemovePodSandbox for \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\"" Jan 30 13:52:36.327997 containerd[1453]: time="2025-01-30T13:52:36.327968710Z" level=info msg="Forcibly stopping sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\"" Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.360 [WARNING][5446] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0", GenerateName:"calico-kube-controllers-7f465bb95b-", Namespace:"calico-system", SelfLink:"", UID:"2e76c3ad-2358-49df-8767-24e970a1ef0c", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f465bb95b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52b26ffa1291c6666498679f958c5b7013060a85d0f9731a5a82e3f972ee6d41", Pod:"calico-kube-controllers-7f465bb95b-wgbwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0767456de23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.360 [INFO][5446] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.360 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" iface="eth0" netns="" Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.360 [INFO][5446] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.360 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.381 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" HandleID="k8s-pod-network.9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.381 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.381 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.386 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" HandleID="k8s-pod-network.9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.386 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" HandleID="k8s-pod-network.9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Workload="localhost-k8s-calico--kube--controllers--7f465bb95b--wgbwq-eth0" Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.388 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:36.393858 containerd[1453]: 2025-01-30 13:52:36.390 [INFO][5446] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856" Jan 30 13:52:36.394379 containerd[1453]: time="2025-01-30T13:52:36.393892300Z" level=info msg="TearDown network for sandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\" successfully" Jan 30 13:52:36.399403 containerd[1453]: time="2025-01-30T13:52:36.399352945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:36.399485 containerd[1453]: time="2025-01-30T13:52:36.399417657Z" level=info msg="RemovePodSandbox \"9f16c76b597d39841a0ce5cd722c93c104d9fcfd30d5a8fa53b71a0329d9b856\" returns successfully" Jan 30 13:52:36.399966 containerd[1453]: time="2025-01-30T13:52:36.399939526Z" level=info msg="StopPodSandbox for \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\"" Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.433 [WARNING][5476] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lzkrl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"671740aa-5720-4131-9f78-6538b2c8e710", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a", Pod:"csi-node-driver-lzkrl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e02888b025", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.433 [INFO][5476] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.433 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" iface="eth0" netns="" Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.433 [INFO][5476] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.433 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.454 [INFO][5483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" HandleID="k8s-pod-network.f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.454 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.454 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.458 [WARNING][5483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" HandleID="k8s-pod-network.f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.458 [INFO][5483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" HandleID="k8s-pod-network.f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.459 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:36.465514 containerd[1453]: 2025-01-30 13:52:36.462 [INFO][5476] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:36.465514 containerd[1453]: time="2025-01-30T13:52:36.465481531Z" level=info msg="TearDown network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\" successfully" Jan 30 13:52:36.465514 containerd[1453]: time="2025-01-30T13:52:36.465508492Z" level=info msg="StopPodSandbox for \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\" returns successfully" Jan 30 13:52:36.466063 containerd[1453]: time="2025-01-30T13:52:36.465995936Z" level=info msg="RemovePodSandbox for \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\"" Jan 30 13:52:36.466063 containerd[1453]: time="2025-01-30T13:52:36.466034127Z" level=info msg="Forcibly stopping sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\"" Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.515 [WARNING][5506] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lzkrl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"671740aa-5720-4131-9f78-6538b2c8e710", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34292eea8ede4e4dc8dfbd155aef6d0ba6163b2ad9a3ae589473ad04a58e2e1a", Pod:"csi-node-driver-lzkrl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e02888b025", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.515 [INFO][5506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.515 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" iface="eth0" netns="" Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.515 [INFO][5506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.515 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.537 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" HandleID="k8s-pod-network.f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.537 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.537 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.541 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" HandleID="k8s-pod-network.f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.541 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" HandleID="k8s-pod-network.f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Workload="localhost-k8s-csi--node--driver--lzkrl-eth0" Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.542 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:36.548172 containerd[1453]: 2025-01-30 13:52:36.545 [INFO][5506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a" Jan 30 13:52:36.548639 containerd[1453]: time="2025-01-30T13:52:36.548207286Z" level=info msg="TearDown network for sandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\" successfully" Jan 30 13:52:36.646939 containerd[1453]: time="2025-01-30T13:52:36.646870575Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:36.646939 containerd[1453]: time="2025-01-30T13:52:36.646936379Z" level=info msg="RemovePodSandbox \"f1ab6ee5c0c670a9b935a80b43c99dc2bd6115d52701f98fda6c2b369ef8329a\" returns successfully" Jan 30 13:52:39.132507 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:58186.service - OpenSSH per-connection server daemon (10.0.0.1:58186). Jan 30 13:52:39.171331 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 58186 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:39.173112 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:39.176998 systemd-logind[1441]: New session 18 of user core. Jan 30 13:52:39.184894 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:52:39.306029 sshd[5523]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:39.317635 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:58186.service: Deactivated successfully. Jan 30 13:52:39.319385 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:52:39.321302 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:52:39.322720 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:58202.service - OpenSSH per-connection server daemon (10.0.0.1:58202). Jan 30 13:52:39.323986 systemd-logind[1441]: Removed session 18. Jan 30 13:52:39.367924 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 58202 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:39.369347 sshd[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:39.373145 systemd-logind[1441]: New session 19 of user core. Jan 30 13:52:39.381886 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:52:39.564366 sshd[5537]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:39.574835 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:58202.service: Deactivated successfully. Jan 30 13:52:39.576827 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 30 13:52:39.578628 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:52:39.588262 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:58210.service - OpenSSH per-connection server daemon (10.0.0.1:58210). Jan 30 13:52:39.589338 systemd-logind[1441]: Removed session 19. Jan 30 13:52:39.618593 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 58210 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:39.620145 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:39.623900 systemd-logind[1441]: New session 20 of user core. Jan 30 13:52:39.634882 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:52:41.202835 sshd[5549]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:41.213053 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:58210.service: Deactivated successfully. Jan 30 13:52:41.217921 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:52:41.219598 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:52:41.226018 systemd[1]: Started sshd@20-10.0.0.119:22-10.0.0.1:58212.service - OpenSSH per-connection server daemon (10.0.0.1:58212). Jan 30 13:52:41.227397 systemd-logind[1441]: Removed session 20. Jan 30 13:52:41.274345 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 58212 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:41.276011 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:41.280221 systemd-logind[1441]: New session 21 of user core. Jan 30 13:52:41.293899 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:52:41.511696 sshd[5568]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:41.518742 systemd[1]: sshd@20-10.0.0.119:22-10.0.0.1:58212.service: Deactivated successfully. Jan 30 13:52:41.520685 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:52:41.522410 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:52:41.527003 systemd[1]: Started sshd@21-10.0.0.119:22-10.0.0.1:58218.service - OpenSSH per-connection server daemon (10.0.0.1:58218). Jan 30 13:52:41.527839 systemd-logind[1441]: Removed session 21. Jan 30 13:52:41.557669 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 58218 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:41.559333 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:41.563586 systemd-logind[1441]: New session 22 of user core. Jan 30 13:52:41.574901 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:52:41.744640 sshd[5581]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:41.747944 systemd[1]: sshd@21-10.0.0.119:22-10.0.0.1:58218.service: Deactivated successfully. Jan 30 13:52:41.750080 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:52:41.750715 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:52:41.751529 systemd-logind[1441]: Removed session 22. Jan 30 13:52:46.755730 systemd[1]: Started sshd@22-10.0.0.119:22-10.0.0.1:58234.service - OpenSSH per-connection server daemon (10.0.0.1:58234). 
Jan 30 13:52:46.791300 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 58234 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:46.792960 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:46.796974 systemd-logind[1441]: New session 23 of user core. Jan 30 13:52:46.807020 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:52:46.917539 sshd[5601]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:46.922369 systemd[1]: sshd@22-10.0.0.119:22-10.0.0.1:58234.service: Deactivated successfully. Jan 30 13:52:46.924849 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:52:46.925595 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:52:46.926701 systemd-logind[1441]: Removed session 23. Jan 30 13:52:51.928997 systemd[1]: Started sshd@23-10.0.0.119:22-10.0.0.1:39904.service - OpenSSH per-connection server daemon (10.0.0.1:39904). Jan 30 13:52:51.964954 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 39904 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:51.966831 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:51.971416 systemd-logind[1441]: New session 24 of user core. Jan 30 13:52:51.981955 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:52:52.096903 sshd[5621]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:52.101314 systemd[1]: sshd@23-10.0.0.119:22-10.0.0.1:39904.service: Deactivated successfully. Jan 30 13:52:52.103391 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:52:52.104235 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:52:52.105100 systemd-logind[1441]: Removed session 24. Jan 30 13:52:55.611746 kubelet[2510]: E0130 13:52:55.611697 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:57.109103 systemd[1]: Started sshd@24-10.0.0.119:22-10.0.0.1:39910.service - OpenSSH per-connection server daemon (10.0.0.1:39910). Jan 30 13:52:57.126286 kubelet[2510]: I0130 13:52:57.126245 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:57.687366 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 39910 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:52:57.688921 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:57.692658 systemd-logind[1441]: New session 25 of user core. Jan 30 13:52:57.701883 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:52:57.958308 sshd[5655]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:57.962817 systemd[1]: sshd@24-10.0.0.119:22-10.0.0.1:39910.service: Deactivated successfully. Jan 30 13:52:57.964903 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:52:57.965715 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:52:57.966653 systemd-logind[1441]: Removed session 25. Jan 30 13:53:02.975729 systemd[1]: Started sshd@25-10.0.0.119:22-10.0.0.1:51284.service - OpenSSH per-connection server daemon (10.0.0.1:51284). 
Jan 30 13:53:03.010287 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 51284 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:53:03.012074 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:03.016450 systemd-logind[1441]: New session 26 of user core. Jan 30 13:53:03.031905 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:53:03.156753 sshd[5696]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:03.161746 systemd[1]: sshd@25-10.0.0.119:22-10.0.0.1:51284.service: Deactivated successfully. Jan 30 13:53:03.164607 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:53:03.165664 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:53:03.166863 systemd-logind[1441]: Removed session 26. Jan 30 13:53:03.610648 kubelet[2510]: E0130 13:53:03.610597 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"