Jan 17 12:08:57.913457 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:08:57.913478 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:08:57.913489 kernel: BIOS-provided physical RAM map: Jan 17 12:08:57.913495 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 12:08:57.913501 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 17 12:08:57.913507 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 17 12:08:57.913515 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 17 12:08:57.913521 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 17 12:08:57.913527 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 17 12:08:57.913533 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 17 12:08:57.913542 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 17 12:08:57.913548 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 17 12:08:57.913554 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 17 12:08:57.913560 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 17 12:08:57.913568 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 17 12:08:57.913583 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 17 12:08:57.913603 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 17 12:08:57.913610 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 17 12:08:57.913617 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 17 12:08:57.913623 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 12:08:57.913630 kernel: NX (Execute Disable) protection: active Jan 17 12:08:57.913637 kernel: APIC: Static calls initialized Jan 17 12:08:57.913643 kernel: efi: EFI v2.7 by EDK II Jan 17 12:08:57.913650 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jan 17 12:08:57.913657 kernel: SMBIOS 2.8 present. 
Jan 17 12:08:57.913664 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 17 12:08:57.913670 kernel: Hypervisor detected: KVM Jan 17 12:08:57.913679 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:08:57.913686 kernel: kvm-clock: using sched offset of 4313724506 cycles Jan 17 12:08:57.913693 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:08:57.913700 kernel: tsc: Detected 2794.748 MHz processor Jan 17 12:08:57.913708 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:08:57.913715 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:08:57.913722 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 17 12:08:57.913729 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 12:08:57.913736 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:08:57.913745 kernel: Using GB pages for direct mapping Jan 17 12:08:57.913752 kernel: Secure boot disabled Jan 17 12:08:57.913759 kernel: ACPI: Early table checksum verification disabled Jan 17 12:08:57.913766 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 17 12:08:57.913777 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 17 12:08:57.913784 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:57.913791 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:57.913800 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 17 12:08:57.913808 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:57.913815 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:57.913822 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:57.913829 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:57.913836 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 17 12:08:57.913843 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 17 12:08:57.913853 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 17 12:08:57.913860 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 17 12:08:57.913867 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 17 12:08:57.913874 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 17 12:08:57.913881 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 17 12:08:57.913889 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 17 12:08:57.913896 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 17 12:08:57.913903 kernel: No NUMA configuration found Jan 17 12:08:57.913910 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 17 12:08:57.913919 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 17 12:08:57.913927 kernel: Zone ranges: Jan 17 12:08:57.913934 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:08:57.913941 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 17 12:08:57.913948 kernel: Normal empty Jan 17 12:08:57.913955 kernel: Movable zone start for each node Jan 17 12:08:57.913962 kernel: Early memory node ranges Jan 17 12:08:57.913969 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 12:08:57.913976 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 17 12:08:57.913983 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 17 12:08:57.913992 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 17 12:08:57.914000 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 17 12:08:57.914006 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 17 12:08:57.914014 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 17 12:08:57.914021 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:08:57.914028 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 12:08:57.914035 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 17 12:08:57.914042 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:08:57.914049 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 17 12:08:57.914058 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 17 12:08:57.914065 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 17 12:08:57.914072 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:08:57.914079 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:08:57.914087 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:08:57.914094 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:08:57.914101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:08:57.914108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:08:57.914115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:08:57.914122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:08:57.914131 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:08:57.914139 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:08:57.914146 kernel: TSC deadline timer available Jan 17 12:08:57.914153 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 12:08:57.914160 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:08:57.914167 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 12:08:57.914174 kernel: kvm-guest: setup PV sched yield Jan 17 12:08:57.914181 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 12:08:57.914188 kernel: Booting paravirtualized kernel on KVM Jan 17 12:08:57.914197 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:08:57.914205 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 12:08:57.914212 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 17 12:08:57.914219 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 17 12:08:57.914226 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 12:08:57.914233 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:08:57.914240 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:08:57.914248 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 
12:08:57.914258 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:08:57.914265 kernel: random: crng init done Jan 17 12:08:57.914273 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:08:57.914280 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:08:57.914287 kernel: Fallback order for Node 0: 0 Jan 17 12:08:57.914294 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 17 12:08:57.914301 kernel: Policy zone: DMA32 Jan 17 12:08:57.914308 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:08:57.914316 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 171124K reserved, 0K cma-reserved) Jan 17 12:08:57.914326 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 12:08:57.914333 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:08:57.914340 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:08:57.914347 kernel: Dynamic Preempt: voluntary Jan 17 12:08:57.914361 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:08:57.914371 kernel: rcu: RCU event tracing is enabled. Jan 17 12:08:57.914379 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 12:08:57.914387 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:08:57.914394 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:08:57.914402 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:08:57.914409 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:08:57.914417 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 12:08:57.914427 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 12:08:57.914434 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:08:57.914442 kernel: Console: colour dummy device 80x25 Jan 17 12:08:57.914449 kernel: printk: console [ttyS0] enabled Jan 17 12:08:57.914457 kernel: ACPI: Core revision 20230628 Jan 17 12:08:57.914466 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 12:08:57.914474 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:08:57.914481 kernel: x2apic enabled Jan 17 12:08:57.914489 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:08:57.914497 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 12:08:57.914505 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 12:08:57.914514 kernel: kvm-guest: setup PV IPIs Jan 17 12:08:57.914522 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:08:57.914531 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 12:08:57.914541 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 17 12:08:57.914549 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 12:08:57.914556 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 12:08:57.914564 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 12:08:57.914579 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:08:57.914597 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:08:57.914607 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:08:57.914616 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:08:57.914626 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 17 12:08:57.914638 kernel: RETBleed: Mitigation: untrained return thunk Jan 17 12:08:57.914646 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:08:57.914653 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:08:57.914661 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 12:08:57.914669 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 12:08:57.914677 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 12:08:57.914684 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:08:57.914692 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:08:57.914701 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:08:57.914709 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:08:57.914717 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 12:08:57.914724 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:08:57.914731 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:08:57.914739 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:08:57.914747 kernel: landlock: Up and running. Jan 17 12:08:57.914754 kernel: SELinux: Initializing. Jan 17 12:08:57.914761 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:08:57.914771 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:08:57.914779 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 17 12:08:57.914786 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:08:57.914794 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:08:57.914802 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:08:57.914809 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 17 12:08:57.914816 kernel: ... version: 0 Jan 17 12:08:57.914824 kernel: ... bit width: 48 Jan 17 12:08:57.914831 kernel: ... generic registers: 6 Jan 17 12:08:57.914841 kernel: ... value mask: 0000ffffffffffff Jan 17 12:08:57.914848 kernel: ... max period: 00007fffffffffff Jan 17 12:08:57.914855 kernel: ... fixed-purpose events: 0 Jan 17 12:08:57.914863 kernel: ... 
event mask: 000000000000003f Jan 17 12:08:57.914870 kernel: signal: max sigframe size: 1776 Jan 17 12:08:57.914878 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:08:57.914885 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:08:57.914893 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:08:57.914900 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:08:57.914910 kernel: .... node #0, CPUs: #1 #2 #3 Jan 17 12:08:57.914917 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 12:08:57.914924 kernel: smpboot: Max logical packages: 1 Jan 17 12:08:57.914932 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 17 12:08:57.914939 kernel: devtmpfs: initialized Jan 17 12:08:57.914947 kernel: x86/mm: Memory block size: 128MB Jan 17 12:08:57.914954 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 17 12:08:57.914962 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 17 12:08:57.914970 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 17 12:08:57.914980 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 17 12:08:57.914987 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 17 12:08:57.914995 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:08:57.915002 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 12:08:57.915010 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:08:57.915017 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:08:57.915025 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:08:57.915032 kernel: audit: type=2000 audit(1737115736.870:1): state=initialized audit_enabled=0 res=1 Jan 17 12:08:57.915040 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:08:57.915049 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:08:57.915057 kernel: cpuidle: using governor menu Jan 17 12:08:57.915064 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:08:57.915072 kernel: dca service started, version 1.12.1 Jan 17 12:08:57.915079 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 12:08:57.915087 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 12:08:57.915094 kernel: PCI: Using configuration type 1 for base access Jan 17 12:08:57.915102 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:08:57.915109 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:08:57.915119 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:08:57.915126 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:08:57.915134 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:08:57.915141 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:08:57.915149 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:08:57.915156 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:08:57.915164 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:08:57.915171 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:08:57.915179 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:08:57.915188 kernel: ACPI: Interpreter enabled Jan 17 12:08:57.915196 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 12:08:57.915203 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:08:57.915211 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:08:57.915218 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:08:57.915226 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 12:08:57.915233 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:08:57.915407 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:08:57.915538 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 12:08:57.915688 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 12:08:57.915702 kernel: PCI host bridge to bus 0000:00 Jan 17 12:08:57.915834 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:08:57.915947 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:08:57.916059 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:08:57.916169 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 12:08:57.916285 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:08:57.916406 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 17 12:08:57.916522 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:08:57.916802 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 12:08:57.916934 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 12:08:57.917058 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 17 12:08:57.917184 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 17 12:08:57.917302 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 17 12:08:57.917423 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 17 12:08:57.917543 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:08:57.917696 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:08:57.917819 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 17 12:08:57.917938 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 17 12:08:57.918062 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 17 12:08:57.918190 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 12:08:57.918309 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 17 
12:08:57.918429 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 17 12:08:57.918549 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 17 12:08:57.918712 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:08:57.918838 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 17 12:08:57.918958 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 17 12:08:57.919079 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 17 12:08:57.919200 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 17 12:08:57.919328 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 12:08:57.919457 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 12:08:57.919619 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 12:08:57.919748 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 17 12:08:57.919867 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 17 12:08:57.919996 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 12:08:57.920116 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 17 12:08:57.920126 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:08:57.920134 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:08:57.920141 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:08:57.920149 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:08:57.920160 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 12:08:57.920168 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 12:08:57.920175 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 12:08:57.920183 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 12:08:57.920190 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 12:08:57.920198 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 12:08:57.920205 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 12:08:57.920213 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 12:08:57.920220 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 12:08:57.920230 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 12:08:57.920237 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 12:08:57.920245 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 12:08:57.920252 kernel: iommu: Default domain type: Translated Jan 17 12:08:57.920260 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:08:57.920267 kernel: efivars: Registered efivars operations Jan 17 12:08:57.920275 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:08:57.920283 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:08:57.920290 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 17 12:08:57.920300 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 17 12:08:57.920307 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 17 12:08:57.920315 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 17 12:08:57.920435 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 12:08:57.920553 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 12:08:57.920695 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 
12:08:57.920706 kernel: vgaarb: loaded Jan 17 12:08:57.920714 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 12:08:57.920721 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 12:08:57.920733 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:08:57.920740 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:08:57.920748 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:08:57.920756 kernel: pnp: PnP ACPI init Jan 17 12:08:57.920889 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 12:08:57.920900 kernel: pnp: PnP ACPI: found 6 devices Jan 17 12:08:57.920908 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:08:57.920915 kernel: NET: Registered PF_INET protocol family Jan 17 12:08:57.920926 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:08:57.920934 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:08:57.920941 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:08:57.920949 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:08:57.920956 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:08:57.920964 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:08:57.920971 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:08:57.920979 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:08:57.920987 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:08:57.920996 kernel: NET: Registered PF_XDP protocol family Jan 17 12:08:57.921160 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 17 12:08:57.921356 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 17 12:08:57.921476 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:08:57.921624 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:08:57.921735 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:08:57.921844 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 12:08:57.921960 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 12:08:57.922072 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 17 12:08:57.922082 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:08:57.922089 kernel: Initialise system trusted keyrings Jan 17 12:08:57.922097 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:08:57.922105 kernel: Key type asymmetric registered Jan 17 12:08:57.922112 kernel: Asymmetric key parser 'x509' registered Jan 17 12:08:57.922120 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:08:57.922127 kernel: io scheduler mq-deadline registered Jan 17 12:08:57.922138 kernel: io scheduler kyber registered Jan 17 12:08:57.922145 kernel: io scheduler bfq registered Jan 17 12:08:57.922153 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:08:57.922161 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 12:08:57.922168 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 12:08:57.922176 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 12:08:57.922183 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Jan 17 12:08:57.922191 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:08:57.922199 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:08:57.922208 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:08:57.922216 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:08:57.922224 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:08:57.922349 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 12:08:57.922466 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 12:08:57.922611 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:08:57 UTC (1737115737) Jan 17 12:08:57.922726 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 12:08:57.922736 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 12:08:57.922747 kernel: efifb: probing for efifb Jan 17 12:08:57.922755 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 17 12:08:57.922763 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 17 12:08:57.922771 kernel: efifb: scrolling: redraw Jan 17 12:08:57.922778 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 17 12:08:57.922786 kernel: Console: switching to colour frame buffer device 100x37 Jan 17 12:08:57.922811 kernel: fb0: EFI VGA frame buffer device Jan 17 12:08:57.922821 kernel: pstore: Using crash dump compression: deflate Jan 17 12:08:57.922829 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 12:08:57.922839 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:08:57.922847 kernel: Segment Routing with IPv6 Jan 17 12:08:57.922855 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:08:57.922863 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:08:57.922871 kernel: Key type dns_resolver registered Jan 17 12:08:57.922878 kernel: IPI shorthand broadcast: enabled Jan 17 12:08:57.922886 kernel: sched_clock: Marking stable (627004391, 175676859)->(873036415, -70355165) Jan 17 12:08:57.922894 kernel: registered taskstats version 1 Jan 17 12:08:57.922902 kernel: Loading compiled-in X.509 certificates Jan 17 12:08:57.922912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:08:57.922920 kernel: Key type .fscrypt registered Jan 17 12:08:57.922927 kernel: Key type fscrypt-provisioning registered Jan 17 12:08:57.922935 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:08:57.922943 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:08:57.922953 kernel: ima: No architecture policies found Jan 17 12:08:57.922961 kernel: clk: Disabling unused clocks Jan 17 12:08:57.922969 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:08:57.922977 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:08:57.922987 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:08:57.922994 kernel: Run /init as init process Jan 17 12:08:57.923002 kernel: with arguments: Jan 17 12:08:57.923010 kernel: /init Jan 17 12:08:57.923017 kernel: with environment: Jan 17 12:08:57.923025 kernel: HOME=/ Jan 17 12:08:57.923033 kernel: TERM=linux Jan 17 12:08:57.923041 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:08:57.923050 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:08:57.923063 systemd[1]: Detected virtualization kvm. Jan 17 12:08:57.923071 systemd[1]: Detected architecture x86-64. Jan 17 12:08:57.923079 systemd[1]: Running in initrd. Jan 17 12:08:57.923089 systemd[1]: No hostname configured, using default hostname. Jan 17 12:08:57.923099 systemd[1]: Hostname set to . Jan 17 12:08:57.923108 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:08:57.923116 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:08:57.923125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:08:57.923133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:08:57.923142 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:08:57.923150 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:08:57.923159 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:08:57.923170 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:08:57.923180 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:08:57.923188 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:08:57.923196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:08:57.923205 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:08:57.923213 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:08:57.923221 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:08:57.923232 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:08:57.923240 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:08:57.923248 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:08:57.923256 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:08:57.923264 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:08:57.923273 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 17 12:08:57.923281 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:08:57.923289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:08:57.923300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:08:57.923308 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:08:57.923317 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:08:57.923326 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:08:57.923334 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:08:57.923342 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:08:57.923350 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:08:57.923359 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:08:57.923367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:08:57.923377 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:08:57.923386 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:08:57.923394 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:08:57.923420 systemd-journald[193]: Collecting audit messages is disabled. Jan 17 12:08:57.923442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:08:57.923451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:08:57.923459 systemd-journald[193]: Journal started Jan 17 12:08:57.923479 systemd-journald[193]: Runtime Journal (/run/log/journal/a3cc3629f7664a529c1e91b17a0ec5d0) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:08:57.926122 systemd-modules-load[194]: Inserted module 'overlay' Jan 17 12:08:57.928073 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:08:57.926822 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:08:57.932290 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:08:57.933746 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:08:57.941756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:08:57.949492 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:08:57.956756 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:08:57.961535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:08:57.965768 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:08:57.969330 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 17 12:08:57.970377 kernel: Bridge firewalling registered Jan 17 12:08:57.970816 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:08:57.972517 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:08:57.975850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 12:08:57.982363 dracut-cmdline[221]: dracut-dracut-053 Jan 17 12:08:57.990965 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:08:58.004413 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:08:58.013808 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:08:58.064954 systemd-resolved[263]: Positive Trust Anchors: Jan 17 12:08:58.077608 kernel: SCSI subsystem initialized Jan 17 12:08:58.064978 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:08:58.065016 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:08:58.067884 systemd-resolved[263]: Defaulting to hostname 'linux'. Jan 17 12:08:58.068954 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:08:58.088741 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:08:58.136612 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:08:58.173620 kernel: iscsi: registered transport (tcp) Jan 17 12:08:58.200673 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:08:58.200791 kernel: QLogic iSCSI HBA Driver Jan 17 12:08:58.251130 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:08:58.262831 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:08:58.290510 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:08:58.290611 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:08:58.290626 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:08:58.339641 kernel: raid6: avx2x4 gen() 21826 MB/s Jan 17 12:08:58.356627 kernel: raid6: avx2x2 gen() 22069 MB/s Jan 17 12:08:58.373938 kernel: raid6: avx2x1 gen() 18555 MB/s Jan 17 12:08:58.374035 kernel: raid6: using algorithm avx2x2 gen() 22069 MB/s Jan 17 12:08:58.397205 kernel: raid6: .... xor() 14579 MB/s, rmw enabled Jan 17 12:08:58.397309 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:08:58.423633 kernel: xor: automatically using best checksumming function avx Jan 17 12:08:58.616630 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:08:58.631671 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:08:58.652843 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:08:58.670771 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 17 12:08:58.677453 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 12:08:58.690782 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:08:58.709869 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Jan 17 12:08:58.748599 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:08:58.753938 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:08:58.832755 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:08:58.845761 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:08:58.860965 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 12:08:58.897487 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 12:08:58.897665 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:08:58.897678 kernel: GPT:9289727 != 19775487 Jan 17 12:08:58.897689 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:08:58.897699 kernel: GPT:9289727 != 19775487 Jan 17 12:08:58.897709 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:08:58.897719 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:08:58.897736 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:08:58.862770 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:08:58.900086 kernel: libata version 3.00 loaded. Jan 17 12:08:58.865176 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:08:58.868809 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:08:58.870654 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:08:58.881784 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:08:58.913898 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:08:58.913932 kernel: AES CTR mode by8 optimization enabled Jan 17 12:08:58.898325 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:08:58.898445 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:08:58.901620 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:08:58.906669 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 17 12:08:58.922920 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469) Jan 17 12:08:58.922947 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 12:08:58.950009 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 12:08:58.950033 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (459) Jan 17 12:08:58.950045 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 12:08:58.950193 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 12:08:58.950325 kernel: scsi host0: ahci Jan 17 12:08:58.950468 kernel: scsi host1: ahci Jan 17 12:08:58.950666 kernel: scsi host2: ahci Jan 17 12:08:58.950824 kernel: scsi host3: ahci Jan 17 12:08:58.950960 kernel: scsi host4: ahci Jan 17 12:08:58.951096 kernel: scsi host5: ahci Jan 17 12:08:58.951231 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 17 12:08:58.951242 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 17 12:08:58.951252 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 17 12:08:58.951263 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 17 12:08:58.951273 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 17 12:08:58.951288 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 17 12:08:58.906828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:08:58.909251 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:08:58.924009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:08:58.927919 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:08:58.943362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:08:58.975198 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:08:58.989035 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:08:58.993778 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:08:59.003933 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:08:59.012928 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:08:59.023812 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:08:59.026453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:08:59.027634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:08:59.029203 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:08:59.032338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:08:59.049656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:08:59.065798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:08:59.083019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:08:59.142091 disk-uuid[554]: Primary Header is updated. 
Jan 17 12:08:59.142091 disk-uuid[554]: Secondary Entries is updated. Jan 17 12:08:59.142091 disk-uuid[554]: Secondary Header is updated. Jan 17 12:08:59.145745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:08:59.153624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:08:59.259767 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 12:08:59.259867 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:08:59.259881 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 12:08:59.259893 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 12:08:59.260615 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:08:59.261618 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 12:08:59.262615 kernel: ata3.00: applying bridge limits Jan 17 12:08:59.262627 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:08:59.263618 kernel: ata3.00: configured for UDMA/100 Jan 17 12:08:59.264619 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 12:08:59.309638 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 12:08:59.322788 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:08:59.322810 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:09:00.158530 disk-uuid[569]: The operation has completed successfully. Jan 17 12:09:00.159827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:09:00.189696 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:09:00.189823 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:09:00.220904 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:09:00.226890 sh[598]: Success Jan 17 12:09:00.240644 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 12:09:00.279755 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:09:00.305626 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:09:00.310766 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:09:00.322324 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:09:00.322357 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:09:00.322369 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:09:00.323621 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:09:00.325242 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:09:00.329357 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:09:00.331798 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:09:00.343743 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:09:00.346937 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 17 12:09:00.356764 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:09:00.356806 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:09:00.356817 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:09:00.360922 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:09:00.370426 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:09:00.372158 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:09:00.458427 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:09:00.466824 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:09:00.469282 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:09:00.472775 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:09:00.488826 systemd-networkd[776]: lo: Link UP Jan 17 12:09:00.488837 systemd-networkd[776]: lo: Gained carrier Jan 17 12:09:00.490381 systemd-networkd[776]: Enumeration completed Jan 17 12:09:00.490723 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:09:00.491179 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:00.491183 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:09:00.492021 systemd-networkd[776]: eth0: Link UP Jan 17 12:09:00.492025 systemd-networkd[776]: eth0: Gained carrier Jan 17 12:09:00.492031 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:00.493131 systemd[1]: Reached target network.target - Network. 
Jan 17 12:09:00.529714 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:09:00.561606 ignition[779]: Ignition 2.19.0 Jan 17 12:09:00.561619 ignition[779]: Stage: fetch-offline Jan 17 12:09:00.561663 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:00.561674 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:09:00.561764 ignition[779]: parsed url from cmdline: "" Jan 17 12:09:00.561768 ignition[779]: no config URL provided Jan 17 12:09:00.561773 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:09:00.561783 ignition[779]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:09:00.561811 ignition[779]: op(1): [started] loading QEMU firmware config module Jan 17 12:09:00.561816 ignition[779]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 12:09:00.572726 ignition[779]: op(1): [finished] loading QEMU firmware config module Jan 17 12:09:00.612075 ignition[779]: parsing config with SHA512: 7f6ca17194e682115b504430c0d54d9fa26406000d78afa8c878c48f124416e28a741cd73c850d6b35b4ba5b8b6e75c858220bb66d2be617323e7da597696f74 Jan 17 12:09:00.616956 unknown[779]: fetched base config from "system" Jan 17 12:09:00.617059 unknown[779]: fetched user config from "qemu" Jan 17 12:09:00.617411 ignition[779]: fetch-offline: fetch-offline passed Jan 17 12:09:00.617470 ignition[779]: Ignition finished successfully Jan 17 12:09:00.619913 systemd-resolved[263]: Detected conflict on linux IN A 10.0.0.49 Jan 17 12:09:00.619922 systemd-resolved[263]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jan 17 12:09:00.622155 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:09:00.624877 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:09:00.647899 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:09:00.663122 ignition[790]: Ignition 2.19.0 Jan 17 12:09:00.663133 ignition[790]: Stage: kargs Jan 17 12:09:00.663304 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:00.663316 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:09:00.664176 ignition[790]: kargs: kargs passed Jan 17 12:09:00.668073 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:09:00.664232 ignition[790]: Ignition finished successfully Jan 17 12:09:00.676775 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:09:00.687524 ignition[799]: Ignition 2.19.0 Jan 17 12:09:00.687535 ignition[799]: Stage: disks Jan 17 12:09:00.687716 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:00.687728 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:09:00.690718 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:09:00.688561 ignition[799]: disks: disks passed Jan 17 12:09:00.692526 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:09:00.688619 ignition[799]: Ignition finished successfully Jan 17 12:09:00.694532 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:09:00.696796 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:09:00.699263 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 17 12:09:00.701490 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:09:00.709748 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:09:00.743952 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:09:00.915323 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:09:00.921727 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:09:01.026638 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:09:01.027529 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:09:01.029172 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:09:01.050813 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:09:01.054005 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:09:01.054865 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:09:01.054920 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:09:01.054955 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:09:01.065299 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:09:01.071843 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:09:01.077687 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817) Jan 17 12:09:01.077738 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:09:01.077749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:09:01.079770 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:09:01.082619 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:09:01.084418 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:09:01.109479 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:09:01.115836 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:09:01.121343 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:09:01.126867 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:09:01.224332 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:09:01.232707 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:09:01.234875 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:09:01.241637 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:09:01.262879 ignition[930]: INFO : Ignition 2.19.0 Jan 17 12:09:01.262879 ignition[930]: INFO : Stage: mount Jan 17 12:09:01.264791 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:01.264791 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:09:01.264523 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 12:09:01.269690 ignition[930]: INFO : mount: mount passed Jan 17 12:09:01.270521 ignition[930]: INFO : Ignition finished successfully Jan 17 12:09:01.274059 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:09:01.288853 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:09:01.321004 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:09:01.341821 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:09:01.348620 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (945) Jan 17 12:09:01.350657 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:09:01.350682 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:09:01.350696 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:09:01.354611 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:09:01.355532 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:09:01.389893 ignition[962]: INFO : Ignition 2.19.0 Jan 17 12:09:01.389893 ignition[962]: INFO : Stage: files Jan 17 12:09:01.391802 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:01.391802 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:09:01.391802 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:09:01.396011 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:09:01.396011 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:09:01.400887 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:09:01.402506 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:09:01.404399 unknown[962]: wrote ssh authorized keys file for user: core Jan 17 12:09:01.405736 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:09:01.407419 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:09:01.407419 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:09:01.445876 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:09:01.520701 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:09:01.522983 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:01.524931 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:01.526812 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:01.528823 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:01.530713 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:01.532726 ignition[962]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:01.534698 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:01.536529 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:01.538399 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:01.540229 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:01.541962 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:09:01.544731 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:09:01.547600 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:09:01.550043 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 17 12:09:01.697835 systemd-networkd[776]: eth0: Gained IPv6LL Jan 17 12:09:02.050605 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:09:02.626183 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:09:02.626183 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:09:02.629963 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:02.632205 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:02.632205 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:09:02.632205 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 12:09:02.636535 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:09:02.638617 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:09:02.638617 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 12:09:02.641827 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:09:02.671048 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:09:02.677167 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:09:02.678888 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for 
"coreos-metadata.service" Jan 17 12:09:02.678888 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:02.678888 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:02.678888 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:02.678888 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:02.678888 ignition[962]: INFO : files: files passed Jan 17 12:09:02.678888 ignition[962]: INFO : Ignition finished successfully Jan 17 12:09:02.681679 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:09:02.692812 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:09:02.696855 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:09:02.700604 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:09:02.700795 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:09:02.709078 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:09:02.712855 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:02.712855 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:02.716231 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:02.718488 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:02.721954 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:09:02.729782 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:09:02.758059 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:09:02.758205 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:09:02.759053 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:09:02.761997 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:09:02.762394 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:09:02.763363 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:09:02.783290 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:02.786332 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:09:02.799532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:02.800992 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:02.803379 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:09:02.805753 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:09:02.805930 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:02.808302 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:09:02.810206 systemd[1]: Stopped target basic.target - Basic System. 
Jan 17 12:09:02.812474 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:09:02.814761 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:09:02.817003 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:09:02.819444 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:09:02.821830 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:09:02.824390 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:09:02.826634 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:09:02.829066 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:09:02.831060 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:09:02.831400 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:09:02.833364 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:02.834993 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:02.837112 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:09:02.837261 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:02.839411 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:09:02.839538 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:09:02.841880 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:09:02.841991 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:09:02.844198 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:09:02.845995 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:09:02.846204 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:02.848914 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:09:02.850908 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:09:02.853060 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:09:02.853206 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:09:02.855297 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:09:02.855421 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:09:02.857773 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:09:02.857981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:02.859886 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:09:02.859996 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:09:02.872865 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:09:02.875507 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:09:02.876656 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:09:02.876848 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:02.879342 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:09:02.879501 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:09:02.885705 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 17 12:09:02.885888 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:09:02.896242 ignition[1017]: INFO : Ignition 2.19.0 Jan 17 12:09:02.896242 ignition[1017]: INFO : Stage: umount Jan 17 12:09:02.899769 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:02.899769 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:09:02.899769 ignition[1017]: INFO : umount: umount passed Jan 17 12:09:02.899769 ignition[1017]: INFO : Ignition finished successfully Jan 17 12:09:02.904625 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:09:02.906689 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:09:02.910018 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:09:02.912057 systemd[1]: Stopped target network.target - Network. Jan 17 12:09:02.913940 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:09:02.915003 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:09:02.917110 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:09:02.917169 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:09:02.921506 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:09:02.921601 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:09:02.924753 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:09:02.926010 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:09:02.928893 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:09:02.931668 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:09:02.937646 systemd-networkd[776]: eth0: DHCPv6 lease lost Jan 17 12:09:02.939711 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:09:02.940754 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:09:02.943290 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:09:02.944253 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:02.964799 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:09:02.965102 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:09:02.965178 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:09:02.967190 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:02.973316 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:09:02.973472 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:09:02.976513 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:09:02.976583 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:02.978817 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:09:02.978866 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:02.980989 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:09:02.981039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:02.997087 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 17 12:09:02.997205 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:09:02.998982 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:09:02.999148 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:03.002160 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:09:03.002233 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:03.003494 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:09:03.003535 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:03.005469 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:09:03.005518 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:09:03.007738 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:09:03.007790 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:09:03.009701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:09:03.009752 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:09:03.022728 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:09:03.029531 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:09:03.029600 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:03.031947 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:09:03.031995 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:09:03.034238 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:09:03.034285 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:03.036796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:03.036846 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:03.039489 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:09:03.039604 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:09:03.099604 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:09:03.099742 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:09:03.101793 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:09:03.103454 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:09:03.103506 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:09:03.113736 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:09:03.121501 systemd[1]: Switching root. Jan 17 12:09:03.154023 systemd-journald[193]: Journal stopped Jan 17 12:09:04.277178 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Jan 17 12:09:04.277247 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:09:04.277261 kernel: SELinux: policy capability open_perms=1 Jan 17 12:09:04.277272 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:09:04.277283 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:09:04.277294 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:09:04.277307 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:09:04.277326 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:09:04.277337 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:09:04.277353 kernel: audit: type=1403 audit(1737115743.513:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:09:04.277365 systemd[1]: Successfully loaded SELinux policy in 41.560ms. Jan 17 12:09:04.277384 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.769ms. Jan 17 12:09:04.277406 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:09:04.277418 systemd[1]: Detected virtualization kvm. Jan 17 12:09:04.277430 systemd[1]: Detected architecture x86-64. Jan 17 12:09:04.277445 systemd[1]: Detected first boot. Jan 17 12:09:04.277456 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:09:04.277468 zram_generator::config[1062]: No configuration found. Jan 17 12:09:04.277482 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:09:04.277494 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:09:04.277505 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:09:04.277517 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:09:04.277530 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:09:04.277542 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:09:04.277560 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:09:04.277572 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:09:04.277759 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:09:04.277778 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:09:04.277796 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:09:04.277808 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:09:04.277881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:04.277894 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:04.277910 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:09:04.277921 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:09:04.277934 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 17 12:09:04.277951 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:09:04.277962 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:09:04.277975 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:04.277987 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:09:04.277999 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:09:04.278011 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:09:04.278025 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:09:04.278037 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:04.278049 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:09:04.278061 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:09:04.278072 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:09:04.278084 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:09:04.278096 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:09:04.278109 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:04.278124 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:04.278136 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:04.278148 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:09:04.278159 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:09:04.278171 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:09:04.278183 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:09:04.278195 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:04.278207 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:09:04.278218 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:09:04.278233 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:09:04.278246 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:09:04.278258 systemd[1]: Reached target machines.target - Containers. Jan 17 12:09:04.278269 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:09:04.278281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:04.278293 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:09:04.278305 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:09:04.278317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:04.278331 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:09:04.278343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:04.278354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 12:09:04.278368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:04.278380 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:09:04.278392 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:09:04.278414 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:09:04.278426 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:09:04.278447 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:09:04.278463 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:09:04.278490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:09:04.278503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:09:04.278515 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:09:04.278527 kernel: loop: module loaded Jan 17 12:09:04.278539 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:09:04.278551 kernel: fuse: init (API version 7.39) Jan 17 12:09:04.278570 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:09:04.278598 systemd[1]: Stopped verity-setup.service. Jan 17 12:09:04.278614 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:04.278640 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:09:04.278671 systemd-journald[1125]: Collecting audit messages is disabled. Jan 17 12:09:04.278697 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:09:04.278708 systemd-journald[1125]: Journal started Jan 17 12:09:04.278729 systemd-journald[1125]: Runtime Journal (/run/log/journal/a3cc3629f7664a529c1e91b17a0ec5d0) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:09:04.054921 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:09:04.077036 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:09:04.077579 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:09:04.283209 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:09:04.283289 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:09:04.284302 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:09:04.286008 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:09:04.287901 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:09:04.289573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:04.291885 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:09:04.294314 kernel: ACPI: bus type drm_connector registered Jan 17 12:09:04.292254 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:09:04.295143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:04.295375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:04.297642 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:09:04.297872 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 17 12:09:04.299841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:04.300068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:04.302237 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:09:04.302599 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:09:04.304539 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:04.304794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:04.306711 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:04.308895 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:09:04.311180 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:09:04.313380 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:09:04.331605 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:09:04.343677 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:09:04.346302 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:09:04.347683 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:09:04.347721 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:09:04.350320 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:09:04.353390 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:09:04.355897 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:09:04.357203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:04.359739 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:09:04.362152 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:09:04.363442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:09:04.367192 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:09:04.368730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:09:04.370308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:09:04.376683 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:09:04.380206 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:09:04.384063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:04.386809 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:09:04.398453 systemd-journald[1125]: Time spent on flushing to /var/log/journal/a3cc3629f7664a529c1e91b17a0ec5d0 is 17.991ms for 1001 entries. Jan 17 12:09:04.398453 systemd-journald[1125]: System Journal (/var/log/journal/a3cc3629f7664a529c1e91b17a0ec5d0) is 8.0M, max 195.6M, 187.6M free. 
Jan 17 12:09:04.430609 systemd-journald[1125]: Received client request to flush runtime journal. Jan 17 12:09:04.430655 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:09:04.390791 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:09:04.392436 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:09:04.397932 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:09:04.403244 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:09:04.415747 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:09:04.421634 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:09:04.423543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:04.435731 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:09:04.439464 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:09:04.443578 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 17 12:09:04.443610 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 17 12:09:04.444749 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:09:04.453856 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:09:04.465874 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:09:04.468176 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:09:04.469498 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:09:04.481626 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 12:09:04.505047 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:09:04.513643 kernel: loop2: detected capacity change from 0 to 210664 Jan 17 12:09:04.513866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:09:04.535405 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jan 17 12:09:04.535429 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jan 17 12:09:04.545451 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:04.559625 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 12:09:04.569765 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 12:09:04.580772 kernel: loop5: detected capacity change from 0 to 210664 Jan 17 12:09:04.585501 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:09:04.587327 (sd-merge)[1203]: Merged extensions into '/usr'. Jan 17 12:09:04.591765 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:09:04.591785 systemd[1]: Reloading... Jan 17 12:09:04.650231 zram_generator::config[1229]: No configuration found. Jan 17 12:09:04.763763 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 17 12:09:04.792195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:04.850711 systemd[1]: Reloading finished in 258 ms. Jan 17 12:09:04.884774 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:09:04.886519 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:09:04.900761 systemd[1]: Starting ensure-sysext.service... Jan 17 12:09:04.902827 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:09:04.910972 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:09:04.910984 systemd[1]: Reloading... Jan 17 12:09:04.932370 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:09:04.932792 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:09:04.934486 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:09:04.934931 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 17 12:09:04.935103 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 17 12:09:04.943012 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:09:04.943163 systemd-tmpfiles[1267]: Skipping /boot Jan 17 12:09:04.966945 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:09:04.966965 systemd-tmpfiles[1267]: Skipping /boot Jan 17 12:09:04.979046 zram_generator::config[1297]: No configuration found. Jan 17 12:09:05.082931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:05.132741 systemd[1]: Reloading finished in 221 ms. Jan 17 12:09:05.150972 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:09:05.164013 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:05.173549 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:09:05.176202 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:09:05.178945 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:09:05.185991 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:09:05.189517 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:05.193799 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:09:05.198114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:05.198289 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:05.202608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:05.209690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 17 12:09:05.213465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:05.214924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:05.218387 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:09:05.221547 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:05.223498 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:09:05.226156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:05.227106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:05.229158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:05.229717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:05.233161 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:05.233403 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:05.235871 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Jan 17 12:09:05.241909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:09:05.242188 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:09:05.251575 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:09:05.255572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:05.255822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:05.258072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:05.260660 augenrules[1363]: No rules Jan 17 12:09:05.279913 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:05.284798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:05.286610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:05.286725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:05.287553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:05.289432 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:09:05.291216 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:05.294182 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:09:05.296054 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:09:05.297792 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:09:05.303107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:05.303297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 17 12:09:05.305231 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:05.305428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:05.307422 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:05.307633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:05.327633 systemd[1]: Finished ensure-sysext.service. Jan 17 12:09:05.332497 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:05.332916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:05.340918 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:05.343418 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:09:05.349766 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:05.366848 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:05.368406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:05.371420 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:09:05.378756 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:09:05.381811 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:09:05.381844 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:05.382515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:05.382728 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:05.386647 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1387) Jan 17 12:09:05.385696 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:09:05.385893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:09:05.387553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:05.387795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:05.392002 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:05.392290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:05.400561 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:09:05.405145 systemd-resolved[1337]: Positive Trust Anchors: Jan 17 12:09:05.405764 systemd-resolved[1337]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:09:05.405817 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:09:05.408876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:09:05.408932 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:09:05.411573 systemd-resolved[1337]: Defaulting to hostname 'linux'. Jan 17 12:09:05.416094 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:09:05.418806 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:05.442617 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:09:05.450627 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:09:05.459912 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 12:09:05.460829 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:09:05.461020 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:09:05.461216 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:09:05.467202 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:09:05.478397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:09:05.486816 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:09:05.502741 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:09:05.504706 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:09:05.509687 systemd-networkd[1409]: lo: Link UP Jan 17 12:09:05.509700 systemd-networkd[1409]: lo: Gained carrier Jan 17 12:09:05.515173 systemd-networkd[1409]: Enumeration completed Jan 17 12:09:05.515275 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:09:05.515643 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:05.515648 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:09:05.516925 systemd[1]: Reached target network.target - Network. Jan 17 12:09:05.520232 systemd-networkd[1409]: eth0: Link UP Jan 17 12:09:05.520238 systemd-networkd[1409]: eth0: Gained carrier Jan 17 12:09:05.520260 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:05.530834 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:09:05.532666 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 17 12:09:05.540747 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:09:05.541550 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Jan 17 12:09:05.542560 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:09:05.542728 systemd-timesyncd[1410]: Initial clock synchronization to Fri 2025-01-17 12:09:05.644198 UTC. Jan 17 12:09:05.606808 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:09:05.608580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:06.619688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:06.620167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:06.624217 kernel: kvm_amd: TSC scaling supported Jan 17 12:09:06.624377 kernel: kvm_amd: Nested Virtualization enabled Jan 17 12:09:06.624399 kernel: kvm_amd: Nested Paging enabled Jan 17 12:09:06.624745 kernel: kvm_amd: LBR virtualization supported Jan 17 12:09:06.625976 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 12:09:06.626001 kernel: kvm_amd: Virtual GIF supported Jan 17 12:09:06.641927 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:06.653656 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:09:06.693229 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:09:06.706949 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:09:06.708831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:06.716355 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:09:06.750877 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:09:06.752588 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:06.753837 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:09:06.755183 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:09:06.756574 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:09:06.758140 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:09:06.761652 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:09:06.763000 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:09:06.764293 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:09:06.764325 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:09:06.765282 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:09:06.767226 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:09:06.770279 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:09:06.784995 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:09:06.788871 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:09:06.791178 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 17 12:09:06.792817 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:09:06.794044 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:09:06.795234 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:09:06.795267 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:09:06.797519 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:09:06.800435 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:09:06.804842 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:09:06.809682 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:09:06.810278 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:09:06.811790 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:09:06.815666 jq[1449]: false Jan 17 12:09:06.816727 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:09:06.820366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:09:06.824478 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:09:06.828164 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:09:06.838944 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:09:06.840827 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:09:06.841472 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:09:06.843083 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:09:06.846242 extend-filesystems[1450]: Found loop3 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found loop4 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found loop5 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found sr0 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found vda Jan 17 12:09:06.846242 extend-filesystems[1450]: Found vda1 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found vda2 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found vda3 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found usr Jan 17 12:09:06.846242 extend-filesystems[1450]: Found vda4 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found vda6 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found vda7 Jan 17 12:09:06.846242 extend-filesystems[1450]: Found vda9 Jan 17 12:09:06.846242 extend-filesystems[1450]: Checking size of /dev/vda9 Jan 17 12:09:06.854703 dbus-daemon[1448]: [system] SELinux support is enabled Jan 17 12:09:06.849748 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:09:06.852805 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:09:06.855962 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:09:06.862092 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:09:06.862361 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 17 12:09:06.862824 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:09:06.863076 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:09:06.870779 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:09:06.871014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:09:06.875623 update_engine[1463]: I20250117 12:09:06.874907 1463 main.cc:92] Flatcar Update Engine starting Jan 17 12:09:06.875874 jq[1465]: true Jan 17 12:09:06.876561 update_engine[1463]: I20250117 12:09:06.876523 1463 update_check_scheduler.cc:74] Next update check in 4m51s Jan 17 12:09:06.881678 extend-filesystems[1450]: Resized partition /dev/vda9 Jan 17 12:09:06.885367 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:09:06.886735 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:09:06.891566 jq[1472]: true Jan 17 12:09:06.893732 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:09:06.902969 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1382) Jan 17 12:09:06.903781 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:09:06.907002 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:09:06.907034 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:09:06.908427 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:09:06.908443 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:09:06.912531 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:09:06.917545 tar[1470]: linux-amd64/helm Jan 17 12:09:06.989311 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:09:06.982255 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:09:06.982282 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:09:06.992749 systemd-logind[1458]: New seat seat0. Jan 17 12:09:07.004093 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:09:07.024854 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:09:07.024854 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:09:07.024854 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:09:07.030982 extend-filesystems[1450]: Resized filesystem in /dev/vda9 Jan 17 12:09:07.027633 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:09:07.028266 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:09:07.036719 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:09:07.038942 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:09:07.041186 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
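
For scale, the extend-filesystems and EXT4 entries above report the root filesystem on /dev/vda9 growing from 553472 to 1864699 blocks of 4 KiB. A quick illustrative conversion of those block counts into sizes (the numbers are copied from the log; the snippet itself is only a sketch, not part of the boot output):

    # Convert the 4 KiB block counts logged for /dev/vda9 into sizes.
    BLOCK_SIZE = 4096  # bytes, per the "(4k) blocks" note in the resize2fs output above
    for label, blocks in (("before resize", 553472), ("after resize", 1864699)):
        size_bytes = blocks * BLOCK_SIZE
        print(f"{label}: {blocks} blocks = {size_bytes} bytes "
              f"≈ {size_bytes / 2**30:.2f} GiB")
    # => before resize ≈ 2.11 GiB, after resize ≈ 7.11 GiB
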
Jan 17 12:09:07.049567 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:09:07.368724 containerd[1471]: time="2025-01-17T12:09:07.368610996Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:09:07.400337 containerd[1471]: time="2025-01-17T12:09:07.399958142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:07.429967 containerd[1471]: time="2025-01-17T12:09:07.429903510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:07.430275 containerd[1471]: time="2025-01-17T12:09:07.430102994Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:09:07.430275 containerd[1471]: time="2025-01-17T12:09:07.430135003Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:09:07.430657 containerd[1471]: time="2025-01-17T12:09:07.430579798Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:09:07.430657 containerd[1471]: time="2025-01-17T12:09:07.430636237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:07.430793 containerd[1471]: time="2025-01-17T12:09:07.430740822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:07.430815 containerd[1471]: time="2025-01-17T12:09:07.430790226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:07.431053 containerd[1471]: time="2025-01-17T12:09:07.431032040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:07.431053 containerd[1471]: time="2025-01-17T12:09:07.431049415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:07.431143 containerd[1471]: time="2025-01-17T12:09:07.431063666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:07.431143 containerd[1471]: time="2025-01-17T12:09:07.431078583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:07.431216 containerd[1471]: time="2025-01-17T12:09:07.431195715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:07.431528 containerd[1471]: time="2025-01-17T12:09:07.431476079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:07.431642 containerd[1471]: time="2025-01-17T12:09:07.431621915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:07.431642 containerd[1471]: time="2025-01-17T12:09:07.431639471Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:09:07.431804 containerd[1471]: time="2025-01-17T12:09:07.431769514Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:09:07.431853 containerd[1471]: time="2025-01-17T12:09:07.431836868Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:09:07.438237 containerd[1471]: time="2025-01-17T12:09:07.438125227Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:09:07.438237 containerd[1471]: time="2025-01-17T12:09:07.438175882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:09:07.438322 containerd[1471]: time="2025-01-17T12:09:07.438190969Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:09:07.438362 containerd[1471]: time="2025-01-17T12:09:07.438342439Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:09:07.438395 containerd[1471]: time="2025-01-17T12:09:07.438367645Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:09:07.438576 containerd[1471]: time="2025-01-17T12:09:07.438551860Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:09:07.441856 containerd[1471]: time="2025-01-17T12:09:07.441816830Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442321713Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442348472Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442362682Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442379352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442393200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442405979Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442419807Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442434794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442447705Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442459445Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442472819Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442503549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442517155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.442700 containerd[1471]: time="2025-01-17T12:09:07.442531275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442544277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442558256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442571831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442584822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442619149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442659513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442676032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442688681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442702156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442715389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442729791Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442749727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442774449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 17 12:09:07.443025 containerd[1471]: time="2025-01-17T12:09:07.442794606Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:09:07.443275 containerd[1471]: time="2025-01-17T12:09:07.442848828Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:09:07.443275 containerd[1471]: time="2025-01-17T12:09:07.442866455Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:09:07.443275 containerd[1471]: time="2025-01-17T12:09:07.442878721Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:09:07.443275 containerd[1471]: time="2025-01-17T12:09:07.442891087Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:09:07.443275 containerd[1471]: time="2025-01-17T12:09:07.442900441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443275 containerd[1471]: time="2025-01-17T12:09:07.442925929Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:09:07.443275 containerd[1471]: time="2025-01-17T12:09:07.442940855Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:09:07.443275 containerd[1471]: time="2025-01-17T12:09:07.442950984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:09:07.443432 containerd[1471]: time="2025-01-17T12:09:07.443228748Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:09:07.443432 containerd[1471]: time="2025-01-17T12:09:07.443287394Z" level=info msg="Connect containerd service" Jan 17 12:09:07.443432 containerd[1471]: time="2025-01-17T12:09:07.443329381Z" level=info msg="using legacy CRI server" Jan 17 12:09:07.443432 containerd[1471]: time="2025-01-17T12:09:07.443335660Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:09:07.443432 containerd[1471]: time="2025-01-17T12:09:07.443432252Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:09:07.444179 containerd[1471]: time="2025-01-17T12:09:07.444141376Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:09:07.444495 containerd[1471]: time="2025-01-17T12:09:07.444470630Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:09:07.444530 containerd[1471]: time="2025-01-17T12:09:07.444523633Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:09:07.446138 containerd[1471]: time="2025-01-17T12:09:07.444658453Z" level=info msg="Start subscribing containerd event" Jan 17 12:09:07.446138 containerd[1471]: time="2025-01-17T12:09:07.444711294Z" level=info msg="Start recovering state" Jan 17 12:09:07.446138 containerd[1471]: time="2025-01-17T12:09:07.444817855Z" level=info msg="Start event monitor" Jan 17 12:09:07.446138 containerd[1471]: time="2025-01-17T12:09:07.444836873Z" level=info msg="Start snapshots syncer" Jan 17 12:09:07.446138 containerd[1471]: time="2025-01-17T12:09:07.444846306Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:09:07.446138 containerd[1471]: time="2025-01-17T12:09:07.444854561Z" level=info msg="Start streaming server" Jan 17 12:09:07.445007 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:09:07.446496 containerd[1471]: time="2025-01-17T12:09:07.446467947Z" level=info msg="containerd successfully booted in 0.078966s" Jan 17 12:09:07.485340 tar[1470]: linux-amd64/LICENSE Jan 17 12:09:07.485470 tar[1470]: linux-amd64/README.md Jan 17 12:09:07.502945 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:09:07.523693 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:09:07.551775 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:09:07.562837 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:09:07.570910 systemd[1]: issuegen.service: Deactivated successfully. 
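
The long "Start cri plugin with config {...}" entry above is containerd's effective CRI configuration dumped as a Go struct. A minimal sketch of pulling a few commonly checked settings (snapshotter, runc cgroup driver, sandbox image) back out of a saved copy of this console log; the file name boot.log and the regular expressions are assumptions for illustration only:

    import re

    # Illustrative only; "boot.log" is an assumed file holding a copy of this console log.
    text = open("boot.log", encoding="utf-8").read()

    m = re.search(r'Start cri plugin with config (\{.*?\})"', text, re.S)
    if m:
        cfg = m.group(1)  # containerd's CRI PluginConfig, printed as a Go struct
        snapshotter = re.search(r"\bSnapshotter:(\w+)", cfg)
        runc_cgroup = re.search(r"runc:\{.*?SystemdCgroup:(\w+)", cfg, re.S)
        sandbox = re.search(r"SandboxImage:(\S+)", cfg)
        print("snapshotter        :", snapshotter.group(1) if snapshotter else "?")
        print("runc SystemdCgroup :", runc_cgroup.group(1) if runc_cgroup else "?")
        print("sandbox image      :", sandbox.group(1) if sandbox else "?")
    else:
        print("no 'Start cri plugin with config' entry found")

On this boot the dump shows the overlayfs snapshotter, SystemdCgroup:true inside the runc runtime options (the legacy top-level SystemdCgroup field reads false), and registry.k8s.io/pause:3.8 as the sandbox image.
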
Jan 17 12:09:07.571169 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:09:07.574332 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:09:07.588971 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:09:07.592709 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:09:07.595501 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:09:07.597003 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:09:07.649991 systemd-networkd[1409]: eth0: Gained IPv6LL Jan 17 12:09:07.653689 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:09:07.655931 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:09:07.669974 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:09:07.672945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:07.675680 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:09:07.696645 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:09:07.696999 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:09:07.699001 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:09:07.701371 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:09:08.775988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:08.777888 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:09:08.781907 systemd[1]: Startup finished in 772ms (kernel) + 5.797s (initrd) + 5.307s (userspace) = 11.878s. Jan 17 12:09:08.783628 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:09.614051 kubelet[1561]: E0117 12:09:09.613977 1561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:09.618468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:09.618738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:09:09.619077 systemd[1]: kubelet.service: Consumed 1.795s CPU time. Jan 17 12:09:12.379586 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:09:12.380966 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:52310.service - OpenSSH per-connection server daemon (10.0.0.1:52310). Jan 17 12:09:12.429080 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 52310 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:12.431066 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:12.440949 systemd-logind[1458]: New session 1 of user core. Jan 17 12:09:12.442387 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:09:12.456907 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:09:12.582830 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 17 12:09:12.595074 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:09:12.598677 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:09:12.712777 systemd[1579]: Queued start job for default target default.target. Jan 17 12:09:12.721956 systemd[1579]: Created slice app.slice - User Application Slice. Jan 17 12:09:12.721987 systemd[1579]: Reached target paths.target - Paths. Jan 17 12:09:12.722000 systemd[1579]: Reached target timers.target - Timers. Jan 17 12:09:12.723633 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:09:12.736922 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:09:12.737082 systemd[1579]: Reached target sockets.target - Sockets. Jan 17 12:09:12.737108 systemd[1579]: Reached target basic.target - Basic System. Jan 17 12:09:12.737155 systemd[1579]: Reached target default.target - Main User Target. Jan 17 12:09:12.737201 systemd[1579]: Startup finished in 130ms. Jan 17 12:09:12.737618 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:09:12.739231 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:09:12.802291 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:52326.service - OpenSSH per-connection server daemon (10.0.0.1:52326). Jan 17 12:09:12.844732 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 52326 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:12.846385 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:12.850968 systemd-logind[1458]: New session 2 of user core. Jan 17 12:09:12.862736 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:09:12.917157 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:12.924723 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:52326.service: Deactivated successfully. Jan 17 12:09:12.926426 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:09:12.927854 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:09:12.929070 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:52338.service - OpenSSH per-connection server daemon (10.0.0.1:52338). Jan 17 12:09:12.929863 systemd-logind[1458]: Removed session 2. Jan 17 12:09:12.965875 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 52338 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:12.967413 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:12.971491 systemd-logind[1458]: New session 3 of user core. Jan 17 12:09:12.980711 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:09:13.030272 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:13.037258 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:52338.service: Deactivated successfully. Jan 17 12:09:13.039026 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:09:13.040663 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:09:13.052883 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:52346.service - OpenSSH per-connection server daemon (10.0.0.1:52346). Jan 17 12:09:13.053927 systemd-logind[1458]: Removed session 3. 
Jan 17 12:09:13.084481 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 52346 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:13.086047 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:13.089996 systemd-logind[1458]: New session 4 of user core. Jan 17 12:09:13.099716 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:09:13.154081 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:13.163252 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:52346.service: Deactivated successfully. Jan 17 12:09:13.165030 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:09:13.166604 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:09:13.181944 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:52352.service - OpenSSH per-connection server daemon (10.0.0.1:52352). Jan 17 12:09:13.183060 systemd-logind[1458]: Removed session 4. Jan 17 12:09:13.214710 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 52352 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:13.216617 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:13.221777 systemd-logind[1458]: New session 5 of user core. Jan 17 12:09:13.231889 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:09:13.292050 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:09:13.292405 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:13.310550 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:13.312890 sshd[1611]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:13.325355 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:52352.service: Deactivated successfully. Jan 17 12:09:13.326926 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:09:13.328616 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:09:13.329880 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:52360.service - OpenSSH per-connection server daemon (10.0.0.1:52360). Jan 17 12:09:13.330759 systemd-logind[1458]: Removed session 5. Jan 17 12:09:13.366970 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 52360 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:13.368717 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:13.373275 systemd-logind[1458]: New session 6 of user core. Jan 17 12:09:13.387720 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:09:13.442493 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:09:13.442875 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:13.446253 sudo[1623]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:13.451937 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:09:13.452262 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:13.469907 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:09:13.471762 auditctl[1626]: No rules Jan 17 12:09:13.473070 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 17 12:09:13.473376 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:13.475394 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:09:13.509003 augenrules[1644]: No rules Jan 17 12:09:13.511071 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:13.512639 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:13.514744 sshd[1619]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:13.525629 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:52360.service: Deactivated successfully. Jan 17 12:09:13.527317 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:09:13.528679 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:09:13.544877 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:52364.service - OpenSSH per-connection server daemon (10.0.0.1:52364). Jan 17 12:09:13.545775 systemd-logind[1458]: Removed session 6. Jan 17 12:09:13.576830 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 52364 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:13.578344 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:13.582213 systemd-logind[1458]: New session 7 of user core. Jan 17 12:09:13.590709 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:09:13.645808 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:09:13.646207 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:13.949830 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:09:13.950005 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:09:14.750332 dockerd[1673]: time="2025-01-17T12:09:14.750231019Z" level=info msg="Starting up" Jan 17 12:09:15.661817 dockerd[1673]: time="2025-01-17T12:09:15.661767594Z" level=info msg="Loading containers: start." Jan 17 12:09:15.920634 kernel: Initializing XFRM netlink socket Jan 17 12:09:16.010116 systemd-networkd[1409]: docker0: Link UP Jan 17 12:09:16.072906 dockerd[1673]: time="2025-01-17T12:09:16.072830806Z" level=info msg="Loading containers: done." Jan 17 12:09:16.089916 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck639788255-merged.mount: Deactivated successfully. Jan 17 12:09:16.092208 dockerd[1673]: time="2025-01-17T12:09:16.092128974Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:09:16.092367 dockerd[1673]: time="2025-01-17T12:09:16.092310527Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:09:16.092552 dockerd[1673]: time="2025-01-17T12:09:16.092491225Z" level=info msg="Daemon has completed initialization" Jan 17 12:09:16.216064 dockerd[1673]: time="2025-01-17T12:09:16.215974296Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:09:16.216261 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 17 12:09:17.112892 containerd[1471]: time="2025-01-17T12:09:17.112835398Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 17 12:09:18.247111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419032165.mount: Deactivated successfully. Jan 17 12:09:19.398792 containerd[1471]: time="2025-01-17T12:09:19.398725810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:19.399984 containerd[1471]: time="2025-01-17T12:09:19.399935356Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 17 12:09:19.401214 containerd[1471]: time="2025-01-17T12:09:19.401172579Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:19.404335 containerd[1471]: time="2025-01-17T12:09:19.404294029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:19.405665 containerd[1471]: time="2025-01-17T12:09:19.405616435Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.292701635s" Jan 17 12:09:19.405665 containerd[1471]: time="2025-01-17T12:09:19.405653757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 17 12:09:19.427488 containerd[1471]: time="2025-01-17T12:09:19.427445214Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 17 12:09:19.701147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:09:19.712750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:19.904388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:19.909511 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:19.967370 kubelet[1898]: E0117 12:09:19.967169 1898 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:19.975729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:19.975928 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
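
The kubelet exits above, and again on the later restart attempts, because /var/lib/kubelet/config.yaml does not exist yet; that file is typically written when the node is initialized or joined with kubeadm. A purely illustrative pre-check of that condition, with the path taken from the error message in the log:

    from pathlib import Path

    # The error above: open /var/lib/kubelet/config.yaml: no such file or directory
    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.is_file() and cfg.stat().st_size > 0:
        print("kubelet config present:", cfg)
    else:
        print("kubelet config missing or empty; kubelet.service will keep "
              "restarting until it is created (e.g. by kubeadm init/join)")
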
Jan 17 12:09:21.456281 containerd[1471]: time="2025-01-17T12:09:21.456201745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:21.456991 containerd[1471]: time="2025-01-17T12:09:21.456926547Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 17 12:09:21.458497 containerd[1471]: time="2025-01-17T12:09:21.458457659Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:21.461634 containerd[1471]: time="2025-01-17T12:09:21.461583797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:21.462998 containerd[1471]: time="2025-01-17T12:09:21.462960570Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.035471396s" Jan 17 12:09:21.463056 containerd[1471]: time="2025-01-17T12:09:21.462996563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 17 12:09:21.490819 containerd[1471]: time="2025-01-17T12:09:21.490768603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 17 12:09:23.264353 containerd[1471]: time="2025-01-17T12:09:23.264283455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:23.265196 containerd[1471]: time="2025-01-17T12:09:23.265136227Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 17 12:09:23.270835 containerd[1471]: time="2025-01-17T12:09:23.270812389Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:23.274902 containerd[1471]: time="2025-01-17T12:09:23.274876220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:23.275927 containerd[1471]: time="2025-01-17T12:09:23.275871309Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.785053168s" Jan 17 12:09:23.275927 containerd[1471]: time="2025-01-17T12:09:23.275924311Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 17 12:09:23.299946 
containerd[1471]: time="2025-01-17T12:09:23.299899921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 17 12:09:24.313841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825702061.mount: Deactivated successfully. Jan 17 12:09:25.296092 containerd[1471]: time="2025-01-17T12:09:25.296009622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:25.298608 containerd[1471]: time="2025-01-17T12:09:25.298519493Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 17 12:09:25.301902 containerd[1471]: time="2025-01-17T12:09:25.301840412Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:25.307959 containerd[1471]: time="2025-01-17T12:09:25.307871229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:25.308671 containerd[1471]: time="2025-01-17T12:09:25.308575468Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.008631294s" Jan 17 12:09:25.308671 containerd[1471]: time="2025-01-17T12:09:25.308645449Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 17 12:09:25.333786 containerd[1471]: time="2025-01-17T12:09:25.333731950Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:09:25.956106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988625631.mount: Deactivated successfully. 
Jan 17 12:09:27.176756 containerd[1471]: time="2025-01-17T12:09:27.176691351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:27.193191 containerd[1471]: time="2025-01-17T12:09:27.193085033Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:09:27.196266 containerd[1471]: time="2025-01-17T12:09:27.196215207Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:27.200524 containerd[1471]: time="2025-01-17T12:09:27.200452515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:27.202160 containerd[1471]: time="2025-01-17T12:09:27.202084170Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.868298091s" Jan 17 12:09:27.202160 containerd[1471]: time="2025-01-17T12:09:27.202138271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:09:27.230446 containerd[1471]: time="2025-01-17T12:09:27.230394800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:09:27.784661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579613015.mount: Deactivated successfully. 
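
For a rough sense of registry throughput, each "Pulled image ..." entry above pairs an image size with a wall-clock duration. An illustrative back-of-the-envelope calculation using the values copied from those entries (nothing here is measured independently):

    # Sizes (bytes) and durations (seconds) copied from the containerd
    # "Pulled image ..." entries above; throughput is simply size / duration.
    pulls = {
        "kube-apiserver:v1.30.9":          (32673812, 2.292701635),
        "kube-controller-manager:v1.30.9": (31052327, 2.035471396),
        "kube-scheduler:v1.30.9":          (19229664, 1.785053168),
        "kube-proxy:v1.30.9":              (29057356, 2.008631294),
        "coredns:v1.11.1":                 (18182961, 1.868298091),
    }
    for image, (size, seconds) in pulls.items():
        print(f"{image:34s} {size / 2**20 / seconds:5.1f} MiB/s")
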
Jan 17 12:09:27.791574 containerd[1471]: time="2025-01-17T12:09:27.791521506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:27.792402 containerd[1471]: time="2025-01-17T12:09:27.792322236Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:09:27.793603 containerd[1471]: time="2025-01-17T12:09:27.793550515Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:27.795976 containerd[1471]: time="2025-01-17T12:09:27.795917069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:27.796914 containerd[1471]: time="2025-01-17T12:09:27.796858383Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 566.414108ms" Jan 17 12:09:27.796914 containerd[1471]: time="2025-01-17T12:09:27.796900109Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:09:27.821132 containerd[1471]: time="2025-01-17T12:09:27.821059804Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 17 12:09:28.405857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629912019.mount: Deactivated successfully. Jan 17 12:09:30.201417 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:09:30.207796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:30.379397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:30.391062 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:30.449395 kubelet[2057]: E0117 12:09:30.449329 2057 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:30.453965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:30.454192 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:09:31.518133 containerd[1471]: time="2025-01-17T12:09:31.518050960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:31.519209 containerd[1471]: time="2025-01-17T12:09:31.519103002Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 17 12:09:31.520545 containerd[1471]: time="2025-01-17T12:09:31.520494187Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:31.524075 containerd[1471]: time="2025-01-17T12:09:31.524008765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:31.525349 containerd[1471]: time="2025-01-17T12:09:31.525303678Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.704000053s" Jan 17 12:09:31.525401 containerd[1471]: time="2025-01-17T12:09:31.525353876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 17 12:09:34.503012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:34.518980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:34.540184 systemd[1]: Reloading requested from client PID 2150 ('systemctl') (unit session-7.scope)... Jan 17 12:09:34.540200 systemd[1]: Reloading... Jan 17 12:09:34.628246 zram_generator::config[2189]: No configuration found. Jan 17 12:09:34.926682 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:35.005693 systemd[1]: Reloading finished in 465 ms. Jan 17 12:09:35.058172 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:09:35.058271 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:09:35.058548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:35.061390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:35.209537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:35.214933 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:09:35.269424 kubelet[2238]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:09:35.269424 kubelet[2238]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 17 12:09:35.269424 kubelet[2238]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:09:35.269830 kubelet[2238]: I0117 12:09:35.269473 2238 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:09:35.479230 kubelet[2238]: I0117 12:09:35.479117 2238 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:09:35.479230 kubelet[2238]: I0117 12:09:35.479158 2238 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:09:35.479444 kubelet[2238]: I0117 12:09:35.479417 2238 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:09:35.499378 kubelet[2238]: I0117 12:09:35.499317 2238 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:09:35.500205 kubelet[2238]: E0117 12:09:35.500172 2238 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:35.514184 kubelet[2238]: I0117 12:09:35.514139 2238 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:09:35.514462 kubelet[2238]: I0117 12:09:35.514414 2238 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:09:35.514696 kubelet[2238]: I0117 12:09:35.514447 2238 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:09:35.514879 kubelet[2238]: I0117 12:09:35.514705 2238 topology_manager.go:138] "Creating topology manager with 
none policy" Jan 17 12:09:35.514879 kubelet[2238]: I0117 12:09:35.514718 2238 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:09:35.514935 kubelet[2238]: I0117 12:09:35.514893 2238 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:35.519673 kubelet[2238]: I0117 12:09:35.519640 2238 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:09:35.519673 kubelet[2238]: I0117 12:09:35.519664 2238 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:09:35.519733 kubelet[2238]: I0117 12:09:35.519704 2238 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:09:35.519758 kubelet[2238]: I0117 12:09:35.519733 2238 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:09:35.529126 kubelet[2238]: W0117 12:09:35.529064 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:35.529126 kubelet[2238]: E0117 12:09:35.529125 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:35.529204 kubelet[2238]: W0117 12:09:35.529159 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:35.529242 kubelet[2238]: E0117 12:09:35.529223 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:35.546453 kubelet[2238]: I0117 12:09:35.546427 2238 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:09:35.551531 kubelet[2238]: I0117 12:09:35.551515 2238 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:09:35.551632 kubelet[2238]: W0117 12:09:35.551604 2238 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:09:35.552420 kubelet[2238]: I0117 12:09:35.552396 2238 server.go:1264] "Started kubelet" Jan 17 12:09:35.553581 kubelet[2238]: I0117 12:09:35.552507 2238 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:09:35.553581 kubelet[2238]: I0117 12:09:35.552735 2238 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:09:35.553581 kubelet[2238]: I0117 12:09:35.553128 2238 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:09:35.554910 kubelet[2238]: I0117 12:09:35.554825 2238 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:09:35.555921 kubelet[2238]: I0117 12:09:35.555297 2238 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:09:35.556437 kubelet[2238]: E0117 12:09:35.556400 2238 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:09:35.557869 kubelet[2238]: E0117 12:09:35.557169 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:09:35.557869 kubelet[2238]: I0117 12:09:35.557219 2238 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:09:35.557869 kubelet[2238]: I0117 12:09:35.557340 2238 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:09:35.557869 kubelet[2238]: I0117 12:09:35.557409 2238 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:09:35.557869 kubelet[2238]: W0117 12:09:35.557797 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:35.557869 kubelet[2238]: E0117 12:09:35.557841 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:35.558848 kubelet[2238]: E0117 12:09:35.558053 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms" Jan 17 12:09:35.559217 kubelet[2238]: I0117 12:09:35.559180 2238 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:09:35.562778 kubelet[2238]: I0117 12:09:35.562743 2238 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:09:35.562778 kubelet[2238]: I0117 12:09:35.562764 2238 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:09:35.580477 kubelet[2238]: E0117 12:09:35.580313 2238 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b79981f86a3d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:09:35.552365523 +0000 UTC m=+0.333501039,LastTimestamp:2025-01-17 12:09:35.552365523 +0000 UTC m=+0.333501039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:09:35.581535 kubelet[2238]: I0117 12:09:35.581486 2238 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:09:35.582928 kubelet[2238]: I0117 12:09:35.582909 2238 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:09:35.583002 kubelet[2238]: I0117 12:09:35.582947 2238 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:09:35.583002 kubelet[2238]: I0117 12:09:35.582971 2238 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:09:35.583074 kubelet[2238]: E0117 12:09:35.583027 2238 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:09:35.587010 kubelet[2238]: W0117 12:09:35.586554 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:35.587010 kubelet[2238]: E0117 12:09:35.586645 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:35.591771 kubelet[2238]: I0117 12:09:35.591749 2238 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:09:35.591771 kubelet[2238]: I0117 12:09:35.591763 2238 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:09:35.591877 kubelet[2238]: I0117 12:09:35.591783 2238 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:35.658706 kubelet[2238]: I0117 12:09:35.658643 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:35.659057 kubelet[2238]: E0117 12:09:35.659013 2238 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Jan 17 12:09:35.683272 kubelet[2238]: E0117 12:09:35.683235 2238 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:09:35.759268 kubelet[2238]: E0117 12:09:35.759136 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms" Jan 17 12:09:35.860749 kubelet[2238]: I0117 12:09:35.860715 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:35.861134 kubelet[2238]: E0117 12:09:35.861088 2238 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Jan 17 12:09:35.884222 kubelet[2238]: E0117 12:09:35.884175 2238 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:09:36.027378 kubelet[2238]: I0117 12:09:36.027237 2238 policy_none.go:49] "None policy: Start" Jan 17 12:09:36.028356 kubelet[2238]: I0117 12:09:36.028321 2238 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:09:36.028356 kubelet[2238]: I0117 12:09:36.028357 2238 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:09:36.063422 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:09:36.076997 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 17 12:09:36.080047 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:09:36.101909 kubelet[2238]: I0117 12:09:36.101844 2238 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:09:36.102298 kubelet[2238]: I0117 12:09:36.102078 2238 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:09:36.102298 kubelet[2238]: I0117 12:09:36.102225 2238 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:09:36.103318 kubelet[2238]: E0117 12:09:36.103296 2238 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:09:36.160187 kubelet[2238]: E0117 12:09:36.160121 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Jan 17 12:09:36.263198 kubelet[2238]: I0117 12:09:36.263139 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:36.263573 kubelet[2238]: E0117 12:09:36.263532 2238 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Jan 17 12:09:36.284839 kubelet[2238]: I0117 12:09:36.284722 2238 topology_manager.go:215] "Topology Admit Handler" podUID="98b25ed38c40ca52dbc6083002114880" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:09:36.286045 kubelet[2238]: I0117 12:09:36.286004 2238 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:09:36.287328 kubelet[2238]: I0117 12:09:36.286916 2238 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:09:36.293645 systemd[1]: Created slice kubepods-burstable-pod98b25ed38c40ca52dbc6083002114880.slice - libcontainer container kubepods-burstable-pod98b25ed38c40ca52dbc6083002114880.slice. Jan 17 12:09:36.313195 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 17 12:09:36.324489 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. 
Jan 17 12:09:36.361556 kubelet[2238]: I0117 12:09:36.361489 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:36.361556 kubelet[2238]: I0117 12:09:36.361547 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:36.361556 kubelet[2238]: I0117 12:09:36.361569 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:36.361806 kubelet[2238]: I0117 12:09:36.361699 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:09:36.361806 kubelet[2238]: I0117 12:09:36.361761 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:36.361867 kubelet[2238]: I0117 12:09:36.361820 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98b25ed38c40ca52dbc6083002114880-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"98b25ed38c40ca52dbc6083002114880\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:36.361867 kubelet[2238]: I0117 12:09:36.361855 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b25ed38c40ca52dbc6083002114880-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"98b25ed38c40ca52dbc6083002114880\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:36.361927 kubelet[2238]: I0117 12:09:36.361881 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:36.361927 kubelet[2238]: I0117 12:09:36.361905 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98b25ed38c40ca52dbc6083002114880-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"98b25ed38c40ca52dbc6083002114880\") " 
pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:36.399315 kubelet[2238]: W0117 12:09:36.399229 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:36.399315 kubelet[2238]: E0117 12:09:36.399314 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:36.523504 kubelet[2238]: W0117 12:09:36.523416 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:36.523504 kubelet[2238]: E0117 12:09:36.523502 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:36.536990 kubelet[2238]: W0117 12:09:36.536878 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:36.536990 kubelet[2238]: E0117 12:09:36.536925 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:36.612317 kubelet[2238]: E0117 12:09:36.612265 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:36.613052 containerd[1471]: time="2025-01-17T12:09:36.612980080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:98b25ed38c40ca52dbc6083002114880,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:36.623171 kubelet[2238]: E0117 12:09:36.623129 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:36.623492 containerd[1471]: time="2025-01-17T12:09:36.623460980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:36.626883 kubelet[2238]: E0117 12:09:36.626859 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:36.627329 containerd[1471]: time="2025-01-17T12:09:36.627305396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:36.961430 kubelet[2238]: E0117 12:09:36.961368 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="1.6s" Jan 17 12:09:37.065957 kubelet[2238]: I0117 12:09:37.065918 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:37.066419 kubelet[2238]: E0117 12:09:37.066373 2238 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Jan 17 12:09:37.147338 kubelet[2238]: W0117 12:09:37.147263 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:37.147338 kubelet[2238]: E0117 12:09:37.147332 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:37.250832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263348461.mount: Deactivated successfully. Jan 17 12:09:37.257076 containerd[1471]: time="2025-01-17T12:09:37.257031941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:37.258104 containerd[1471]: time="2025-01-17T12:09:37.258028916Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:37.259022 containerd[1471]: time="2025-01-17T12:09:37.258972424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:09:37.260260 containerd[1471]: time="2025-01-17T12:09:37.260216453Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:37.261161 containerd[1471]: time="2025-01-17T12:09:37.261105634Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:09:37.262283 containerd[1471]: time="2025-01-17T12:09:37.262248085Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:37.263532 containerd[1471]: time="2025-01-17T12:09:37.263471719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:09:37.266798 containerd[1471]: time="2025-01-17T12:09:37.266760647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:37.268883 containerd[1471]: time="2025-01-17T12:09:37.268845504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", 
repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 641.482368ms" Jan 17 12:09:37.269717 containerd[1471]: time="2025-01-17T12:09:37.269683504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.58322ms" Jan 17 12:09:37.270413 containerd[1471]: time="2025-01-17T12:09:37.270367825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 646.859213ms" Jan 17 12:09:37.499075 containerd[1471]: time="2025-01-17T12:09:37.488497300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:37.499075 containerd[1471]: time="2025-01-17T12:09:37.488656365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:37.499075 containerd[1471]: time="2025-01-17T12:09:37.488671516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:37.499075 containerd[1471]: time="2025-01-17T12:09:37.488818628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:37.499075 containerd[1471]: time="2025-01-17T12:09:37.494765951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:37.499075 containerd[1471]: time="2025-01-17T12:09:37.494849800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:37.499075 containerd[1471]: time="2025-01-17T12:09:37.494874377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:37.499075 containerd[1471]: time="2025-01-17T12:09:37.495003791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:37.504422 containerd[1471]: time="2025-01-17T12:09:37.504211473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:37.504422 containerd[1471]: time="2025-01-17T12:09:37.504370006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:37.504422 containerd[1471]: time="2025-01-17T12:09:37.504419432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:37.504631 containerd[1471]: time="2025-01-17T12:09:37.504546890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:37.526794 systemd[1]: Started cri-containerd-f32a38866b58d78fd9812b5d6989ac75d03fe3b87b39098a0ea5d0fda38673b9.scope - libcontainer container f32a38866b58d78fd9812b5d6989ac75d03fe3b87b39098a0ea5d0fda38673b9. Jan 17 12:09:37.530845 systemd[1]: Started cri-containerd-99a5dbf9d8fd1fa29270f4a4c3e86ee7268bee67dc3d02c4bca801b8fca1d48d.scope - libcontainer container 99a5dbf9d8fd1fa29270f4a4c3e86ee7268bee67dc3d02c4bca801b8fca1d48d. Jan 17 12:09:37.563193 systemd[1]: Started cri-containerd-a095c9e784f77abdb042127169ef2a8106a2629b611a45fac65ecc54ae90578b.scope - libcontainer container a095c9e784f77abdb042127169ef2a8106a2629b611a45fac65ecc54ae90578b. Jan 17 12:09:37.606254 containerd[1471]: time="2025-01-17T12:09:37.606091036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:98b25ed38c40ca52dbc6083002114880,Namespace:kube-system,Attempt:0,} returns sandbox id \"99a5dbf9d8fd1fa29270f4a4c3e86ee7268bee67dc3d02c4bca801b8fca1d48d\"" Jan 17 12:09:37.610732 kubelet[2238]: E0117 12:09:37.610705 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:37.612647 containerd[1471]: time="2025-01-17T12:09:37.611921810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a095c9e784f77abdb042127169ef2a8106a2629b611a45fac65ecc54ae90578b\"" Jan 17 12:09:37.614405 kubelet[2238]: E0117 12:09:37.614367 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:37.634217 containerd[1471]: time="2025-01-17T12:09:37.634131715Z" level=info msg="CreateContainer within sandbox \"99a5dbf9d8fd1fa29270f4a4c3e86ee7268bee67dc3d02c4bca801b8fca1d48d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:09:37.634717 containerd[1471]: time="2025-01-17T12:09:37.634268568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f32a38866b58d78fd9812b5d6989ac75d03fe3b87b39098a0ea5d0fda38673b9\"" Jan 17 12:09:37.635359 kubelet[2238]: E0117 12:09:37.635335 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:37.637384 containerd[1471]: time="2025-01-17T12:09:37.637353198Z" level=info msg="CreateContainer within sandbox \"f32a38866b58d78fd9812b5d6989ac75d03fe3b87b39098a0ea5d0fda38673b9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:09:37.637982 containerd[1471]: time="2025-01-17T12:09:37.637956787Z" level=info msg="CreateContainer within sandbox \"a095c9e784f77abdb042127169ef2a8106a2629b611a45fac65ecc54ae90578b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:09:37.660422 containerd[1471]: time="2025-01-17T12:09:37.660373246Z" level=info msg="CreateContainer within sandbox \"99a5dbf9d8fd1fa29270f4a4c3e86ee7268bee67dc3d02c4bca801b8fca1d48d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2beafe65ec74713ac9dc0415467f2f69054921e0b4fcf521dca80c7be12e825f\"" Jan 17 
12:09:37.660915 containerd[1471]: time="2025-01-17T12:09:37.660884002Z" level=info msg="StartContainer for \"2beafe65ec74713ac9dc0415467f2f69054921e0b4fcf521dca80c7be12e825f\"" Jan 17 12:09:37.666379 containerd[1471]: time="2025-01-17T12:09:37.666332752Z" level=info msg="CreateContainer within sandbox \"f32a38866b58d78fd9812b5d6989ac75d03fe3b87b39098a0ea5d0fda38673b9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aa8fc5a1f649b099b08ba27324639f31c2967bf777a0e89a3c1a429c6e5a1ae6\"" Jan 17 12:09:37.666900 containerd[1471]: time="2025-01-17T12:09:37.666865598Z" level=info msg="StartContainer for \"aa8fc5a1f649b099b08ba27324639f31c2967bf777a0e89a3c1a429c6e5a1ae6\"" Jan 17 12:09:37.670375 containerd[1471]: time="2025-01-17T12:09:37.670344535Z" level=info msg="CreateContainer within sandbox \"a095c9e784f77abdb042127169ef2a8106a2629b611a45fac65ecc54ae90578b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"550499b109c8efaa17dcfd22da8e0e5a6f754ae5bbe15ad8524135f03b48251c\"" Jan 17 12:09:37.670943 containerd[1471]: time="2025-01-17T12:09:37.670923287Z" level=info msg="StartContainer for \"550499b109c8efaa17dcfd22da8e0e5a6f754ae5bbe15ad8524135f03b48251c\"" Jan 17 12:09:37.681924 kubelet[2238]: E0117 12:09:37.681842 2238 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.49:6443: connect: connection refused Jan 17 12:09:37.688385 systemd[1]: Started cri-containerd-2beafe65ec74713ac9dc0415467f2f69054921e0b4fcf521dca80c7be12e825f.scope - libcontainer container 2beafe65ec74713ac9dc0415467f2f69054921e0b4fcf521dca80c7be12e825f. Jan 17 12:09:37.696731 systemd[1]: Started cri-containerd-aa8fc5a1f649b099b08ba27324639f31c2967bf777a0e89a3c1a429c6e5a1ae6.scope - libcontainer container aa8fc5a1f649b099b08ba27324639f31c2967bf777a0e89a3c1a429c6e5a1ae6. Jan 17 12:09:37.699991 systemd[1]: Started cri-containerd-550499b109c8efaa17dcfd22da8e0e5a6f754ae5bbe15ad8524135f03b48251c.scope - libcontainer container 550499b109c8efaa17dcfd22da8e0e5a6f754ae5bbe15ad8524135f03b48251c. 
Jan 17 12:09:37.760495 containerd[1471]: time="2025-01-17T12:09:37.760274001Z" level=info msg="StartContainer for \"2beafe65ec74713ac9dc0415467f2f69054921e0b4fcf521dca80c7be12e825f\" returns successfully" Jan 17 12:09:37.771669 containerd[1471]: time="2025-01-17T12:09:37.771626645Z" level=info msg="StartContainer for \"aa8fc5a1f649b099b08ba27324639f31c2967bf777a0e89a3c1a429c6e5a1ae6\" returns successfully" Jan 17 12:09:37.785907 containerd[1471]: time="2025-01-17T12:09:37.785014701Z" level=info msg="StartContainer for \"550499b109c8efaa17dcfd22da8e0e5a6f754ae5bbe15ad8524135f03b48251c\" returns successfully" Jan 17 12:09:38.595523 kubelet[2238]: E0117 12:09:38.595432 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:38.598015 kubelet[2238]: E0117 12:09:38.597810 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:38.600236 kubelet[2238]: E0117 12:09:38.600177 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:38.670084 kubelet[2238]: I0117 12:09:38.669509 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:38.835570 kubelet[2238]: E0117 12:09:38.835483 2238 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:09:38.929722 kubelet[2238]: I0117 12:09:38.928490 2238 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:09:39.529924 kubelet[2238]: I0117 12:09:39.529862 2238 apiserver.go:52] "Watching apiserver" Jan 17 12:09:39.558008 kubelet[2238]: I0117 12:09:39.557947 2238 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:09:39.611382 kubelet[2238]: E0117 12:09:39.611332 2238 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:39.611798 kubelet[2238]: E0117 12:09:39.611778 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:40.572106 kubelet[2238]: E0117 12:09:40.572069 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:40.602607 kubelet[2238]: E0117 12:09:40.602564 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:41.692334 systemd[1]: Reloading requested from client PID 2523 ('systemctl') (unit session-7.scope)... Jan 17 12:09:41.692348 systemd[1]: Reloading... Jan 17 12:09:41.767618 zram_generator::config[2565]: No configuration found. Jan 17 12:09:41.869982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 17 12:09:41.961443 systemd[1]: Reloading finished in 268 ms. Jan 17 12:09:42.013617 kubelet[2238]: I0117 12:09:42.013539 2238 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:09:42.013731 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:42.021346 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:09:42.021810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:42.034951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:42.202342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:42.207812 (kubelet)[2607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:09:42.265881 kubelet[2607]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:09:42.265881 kubelet[2607]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:09:42.265881 kubelet[2607]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:09:42.266285 kubelet[2607]: I0117 12:09:42.265886 2607 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:09:42.272566 kubelet[2607]: I0117 12:09:42.272511 2607 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:09:42.272566 kubelet[2607]: I0117 12:09:42.272545 2607 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:09:42.272832 kubelet[2607]: I0117 12:09:42.272802 2607 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:09:42.274164 kubelet[2607]: I0117 12:09:42.274138 2607 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:09:42.275329 kubelet[2607]: I0117 12:09:42.275281 2607 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:09:42.283173 kubelet[2607]: I0117 12:09:42.283145 2607 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:09:42.283429 kubelet[2607]: I0117 12:09:42.283375 2607 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:09:42.283856 kubelet[2607]: I0117 12:09:42.283413 2607 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:09:42.283977 kubelet[2607]: I0117 12:09:42.283870 2607 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:09:42.283977 kubelet[2607]: I0117 12:09:42.283880 2607 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:09:42.283977 kubelet[2607]: I0117 12:09:42.283930 2607 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:42.284086 kubelet[2607]: I0117 12:09:42.284039 2607 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:09:42.284086 kubelet[2607]: I0117 12:09:42.284052 2607 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:09:42.284086 kubelet[2607]: I0117 12:09:42.284076 2607 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:09:42.284176 kubelet[2607]: I0117 12:09:42.284098 2607 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:09:42.285623 kubelet[2607]: I0117 12:09:42.285098 2607 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:09:42.285623 kubelet[2607]: I0117 12:09:42.285277 2607 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:09:42.285713 kubelet[2607]: I0117 12:09:42.285705 2607 server.go:1264] "Started kubelet" Jan 17 12:09:42.286624 kubelet[2607]: I0117 12:09:42.286520 2607 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:09:42.287610 kubelet[2607]: I0117 12:09:42.286952 2607 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 
12:09:42.287610 kubelet[2607]: I0117 12:09:42.287016 2607 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:09:42.287610 kubelet[2607]: I0117 12:09:42.287285 2607 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:09:42.288945 kubelet[2607]: I0117 12:09:42.288905 2607 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:09:42.290361 kubelet[2607]: I0117 12:09:42.290187 2607 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:09:42.298613 kubelet[2607]: I0117 12:09:42.293582 2607 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:09:42.298613 kubelet[2607]: I0117 12:09:42.293806 2607 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:09:42.303063 kubelet[2607]: E0117 12:09:42.303001 2607 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:09:42.308616 kubelet[2607]: I0117 12:09:42.305831 2607 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:09:42.308616 kubelet[2607]: I0117 12:09:42.305852 2607 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:09:42.308616 kubelet[2607]: I0117 12:09:42.305965 2607 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:09:42.311726 kubelet[2607]: I0117 12:09:42.311302 2607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:09:42.313559 kubelet[2607]: I0117 12:09:42.313515 2607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:09:42.313624 kubelet[2607]: I0117 12:09:42.313573 2607 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:09:42.313669 kubelet[2607]: I0117 12:09:42.313623 2607 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:09:42.313710 kubelet[2607]: E0117 12:09:42.313681 2607 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:09:42.342239 kubelet[2607]: I0117 12:09:42.342194 2607 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:09:42.342239 kubelet[2607]: I0117 12:09:42.342222 2607 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:09:42.342239 kubelet[2607]: I0117 12:09:42.342245 2607 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:42.342468 kubelet[2607]: I0117 12:09:42.342430 2607 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:09:42.342468 kubelet[2607]: I0117 12:09:42.342443 2607 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:09:42.342468 kubelet[2607]: I0117 12:09:42.342463 2607 policy_none.go:49] "None policy: Start" Jan 17 12:09:42.343468 kubelet[2607]: I0117 12:09:42.343442 2607 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:09:42.343526 kubelet[2607]: I0117 12:09:42.343479 2607 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:09:42.343684 kubelet[2607]: I0117 12:09:42.343664 2607 state_mem.go:75] "Updated machine memory state" Jan 17 12:09:42.348176 kubelet[2607]: I0117 12:09:42.348151 2607 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:09:42.348463 
kubelet[2607]: I0117 12:09:42.348336 2607 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:09:42.348463 kubelet[2607]: I0117 12:09:42.348431 2607 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:09:42.398402 kubelet[2607]: I0117 12:09:42.398368 2607 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:42.413827 kubelet[2607]: I0117 12:09:42.413788 2607 topology_manager.go:215] "Topology Admit Handler" podUID="98b25ed38c40ca52dbc6083002114880" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:09:42.413908 kubelet[2607]: I0117 12:09:42.413892 2607 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:09:42.414219 kubelet[2607]: I0117 12:09:42.413966 2607 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:09:42.424610 kubelet[2607]: I0117 12:09:42.424497 2607 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:09:42.424610 kubelet[2607]: I0117 12:09:42.424559 2607 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:09:42.425193 kubelet[2607]: E0117 12:09:42.425168 2607 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 12:09:42.595773 kubelet[2607]: I0117 12:09:42.595644 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:42.595773 kubelet[2607]: I0117 12:09:42.595712 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:42.595773 kubelet[2607]: I0117 12:09:42.595738 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:42.595773 kubelet[2607]: I0117 12:09:42.595761 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98b25ed38c40ca52dbc6083002114880-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"98b25ed38c40ca52dbc6083002114880\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:42.595985 kubelet[2607]: I0117 12:09:42.595797 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b25ed38c40ca52dbc6083002114880-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"98b25ed38c40ca52dbc6083002114880\") " 
pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:42.595985 kubelet[2607]: I0117 12:09:42.595819 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:42.595985 kubelet[2607]: I0117 12:09:42.595839 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:42.595985 kubelet[2607]: I0117 12:09:42.595859 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:09:42.595985 kubelet[2607]: I0117 12:09:42.595889 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98b25ed38c40ca52dbc6083002114880-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"98b25ed38c40ca52dbc6083002114880\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:42.726496 kubelet[2607]: E0117 12:09:42.726458 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:42.726669 kubelet[2607]: E0117 12:09:42.726528 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:42.726669 kubelet[2607]: E0117 12:09:42.726534 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:43.285073 kubelet[2607]: I0117 12:09:43.285027 2607 apiserver.go:52] "Watching apiserver" Jan 17 12:09:43.294614 kubelet[2607]: I0117 12:09:43.294557 2607 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:09:43.327555 kubelet[2607]: E0117 12:09:43.327521 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:43.357943 kubelet[2607]: E0117 12:09:43.357903 2607 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:43.358345 kubelet[2607]: E0117 12:09:43.358319 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:43.407476 kubelet[2607]: E0117 12:09:43.406475 2607 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:43.407476 
kubelet[2607]: E0117 12:09:43.406876 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:43.407476 kubelet[2607]: I0117 12:09:43.407043 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.407022844 podStartE2EDuration="1.407022844s" podCreationTimestamp="2025-01-17 12:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:09:43.406317681 +0000 UTC m=+1.193813435" watchObservedRunningTime="2025-01-17 12:09:43.407022844 +0000 UTC m=+1.194518598" Jan 17 12:09:43.417490 kubelet[2607]: I0117 12:09:43.417063 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.416969209 podStartE2EDuration="3.416969209s" podCreationTimestamp="2025-01-17 12:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:09:43.416727645 +0000 UTC m=+1.204223419" watchObservedRunningTime="2025-01-17 12:09:43.416969209 +0000 UTC m=+1.204464963" Jan 17 12:09:43.433606 kubelet[2607]: I0117 12:09:43.433520 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.433497509 podStartE2EDuration="1.433497509s" podCreationTimestamp="2025-01-17 12:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:09:43.423408584 +0000 UTC m=+1.210904338" watchObservedRunningTime="2025-01-17 12:09:43.433497509 +0000 UTC m=+1.220993263" Jan 17 12:09:44.329634 kubelet[2607]: E0117 12:09:44.329570 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:44.332612 kubelet[2607]: E0117 12:09:44.330261 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:45.330922 kubelet[2607]: E0117 12:09:45.330893 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:47.348090 sudo[1655]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:47.351183 sshd[1652]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:47.356161 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:52364.service: Deactivated successfully. Jan 17 12:09:47.358937 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:09:47.359198 systemd[1]: session-7.scope: Consumed 4.935s CPU time, 194.4M memory peak, 0B memory swap peak. Jan 17 12:09:47.359965 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:09:47.360981 systemd-logind[1458]: Removed session 7. 
Jan 17 12:09:49.937406 kubelet[2607]: E0117 12:09:49.937373 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:50.338854 kubelet[2607]: E0117 12:09:50.338809 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:51.909132 update_engine[1463]: I20250117 12:09:51.908727 1463 update_attempter.cc:509] Updating boot flags... Jan 17 12:09:51.943651 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2702) Jan 17 12:09:51.976712 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2705) Jan 17 12:09:52.012631 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2705) Jan 17 12:09:52.700697 kubelet[2607]: E0117 12:09:52.700651 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:53.343982 kubelet[2607]: E0117 12:09:53.343943 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:53.885902 kubelet[2607]: E0117 12:09:53.885839 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:56.492349 kubelet[2607]: I0117 12:09:56.492307 2607 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:09:56.492883 kubelet[2607]: I0117 12:09:56.492848 2607 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:09:56.492926 containerd[1471]: time="2025-01-17T12:09:56.492717510Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:09:57.326777 kubelet[2607]: I0117 12:09:57.326716 2607 topology_manager.go:215] "Topology Admit Handler" podUID="40eb870d-b686-45c5-95fd-6671c3ae559b" podNamespace="kube-system" podName="kube-proxy-q8pd5" Jan 17 12:09:57.333909 systemd[1]: Created slice kubepods-besteffort-pod40eb870d_b686_45c5_95fd_6671c3ae559b.slice - libcontainer container kubepods-besteffort-pod40eb870d_b686_45c5_95fd_6671c3ae559b.slice. 
Jan 17 12:09:57.377615 kubelet[2607]: I0117 12:09:57.377566 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40eb870d-b686-45c5-95fd-6671c3ae559b-kube-proxy\") pod \"kube-proxy-q8pd5\" (UID: \"40eb870d-b686-45c5-95fd-6671c3ae559b\") " pod="kube-system/kube-proxy-q8pd5" Jan 17 12:09:57.377615 kubelet[2607]: I0117 12:09:57.377616 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swb9h\" (UniqueName: \"kubernetes.io/projected/40eb870d-b686-45c5-95fd-6671c3ae559b-kube-api-access-swb9h\") pod \"kube-proxy-q8pd5\" (UID: \"40eb870d-b686-45c5-95fd-6671c3ae559b\") " pod="kube-system/kube-proxy-q8pd5" Jan 17 12:09:57.377791 kubelet[2607]: I0117 12:09:57.377635 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40eb870d-b686-45c5-95fd-6671c3ae559b-xtables-lock\") pod \"kube-proxy-q8pd5\" (UID: \"40eb870d-b686-45c5-95fd-6671c3ae559b\") " pod="kube-system/kube-proxy-q8pd5" Jan 17 12:09:57.377791 kubelet[2607]: I0117 12:09:57.377726 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40eb870d-b686-45c5-95fd-6671c3ae559b-lib-modules\") pod \"kube-proxy-q8pd5\" (UID: \"40eb870d-b686-45c5-95fd-6671c3ae559b\") " pod="kube-system/kube-proxy-q8pd5" Jan 17 12:09:57.487517 kubelet[2607]: I0117 12:09:57.487452 2607 topology_manager.go:215] "Topology Admit Handler" podUID="7ee3cc7a-0160-498e-b0bf-433c4187c564" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-44ndx" Jan 17 12:09:57.498737 systemd[1]: Created slice kubepods-besteffort-pod7ee3cc7a_0160_498e_b0bf_433c4187c564.slice - libcontainer container kubepods-besteffort-pod7ee3cc7a_0160_498e_b0bf_433c4187c564.slice. Jan 17 12:09:57.643286 kubelet[2607]: E0117 12:09:57.643153 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:57.643750 containerd[1471]: time="2025-01-17T12:09:57.643638106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q8pd5,Uid:40eb870d-b686-45c5-95fd-6671c3ae559b,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:57.667899 containerd[1471]: time="2025-01-17T12:09:57.667442996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:57.668070 containerd[1471]: time="2025-01-17T12:09:57.667579073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:57.668070 containerd[1471]: time="2025-01-17T12:09:57.668048173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:57.668205 containerd[1471]: time="2025-01-17T12:09:57.668144998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:57.680628 kubelet[2607]: I0117 12:09:57.680507 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qllhd\" (UniqueName: \"kubernetes.io/projected/7ee3cc7a-0160-498e-b0bf-433c4187c564-kube-api-access-qllhd\") pod \"tigera-operator-7bc55997bb-44ndx\" (UID: \"7ee3cc7a-0160-498e-b0bf-433c4187c564\") " pod="tigera-operator/tigera-operator-7bc55997bb-44ndx" Jan 17 12:09:57.680628 kubelet[2607]: I0117 12:09:57.680554 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ee3cc7a-0160-498e-b0bf-433c4187c564-var-lib-calico\") pod \"tigera-operator-7bc55997bb-44ndx\" (UID: \"7ee3cc7a-0160-498e-b0bf-433c4187c564\") " pod="tigera-operator/tigera-operator-7bc55997bb-44ndx" Jan 17 12:09:57.687767 systemd[1]: Started cri-containerd-36057f5231f0582b4e760244ee291fb38e3cd14ef3a9d2c1495257654dcdd7a3.scope - libcontainer container 36057f5231f0582b4e760244ee291fb38e3cd14ef3a9d2c1495257654dcdd7a3. Jan 17 12:09:57.708438 containerd[1471]: time="2025-01-17T12:09:57.708355987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q8pd5,Uid:40eb870d-b686-45c5-95fd-6671c3ae559b,Namespace:kube-system,Attempt:0,} returns sandbox id \"36057f5231f0582b4e760244ee291fb38e3cd14ef3a9d2c1495257654dcdd7a3\"" Jan 17 12:09:57.709238 kubelet[2607]: E0117 12:09:57.709215 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:57.711639 containerd[1471]: time="2025-01-17T12:09:57.711546102Z" level=info msg="CreateContainer within sandbox \"36057f5231f0582b4e760244ee291fb38e3cd14ef3a9d2c1495257654dcdd7a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:09:57.729084 containerd[1471]: time="2025-01-17T12:09:57.729044557Z" level=info msg="CreateContainer within sandbox \"36057f5231f0582b4e760244ee291fb38e3cd14ef3a9d2c1495257654dcdd7a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"516c42ae22134c389df982700ec762ffb1295c8cf95a0f44fa529bb3695ab471\"" Jan 17 12:09:57.729724 containerd[1471]: time="2025-01-17T12:09:57.729456608Z" level=info msg="StartContainer for \"516c42ae22134c389df982700ec762ffb1295c8cf95a0f44fa529bb3695ab471\"" Jan 17 12:09:57.763726 systemd[1]: Started cri-containerd-516c42ae22134c389df982700ec762ffb1295c8cf95a0f44fa529bb3695ab471.scope - libcontainer container 516c42ae22134c389df982700ec762ffb1295c8cf95a0f44fa529bb3695ab471. Jan 17 12:09:57.797582 containerd[1471]: time="2025-01-17T12:09:57.797542981Z" level=info msg="StartContainer for \"516c42ae22134c389df982700ec762ffb1295c8cf95a0f44fa529bb3695ab471\" returns successfully" Jan 17 12:09:57.801945 containerd[1471]: time="2025-01-17T12:09:57.801894529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-44ndx,Uid:7ee3cc7a-0160-498e-b0bf-433c4187c564,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:09:57.830575 containerd[1471]: time="2025-01-17T12:09:57.830419835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:57.830575 containerd[1471]: time="2025-01-17T12:09:57.830483820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:57.830575 containerd[1471]: time="2025-01-17T12:09:57.830568880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:57.830893 containerd[1471]: time="2025-01-17T12:09:57.830702923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:57.849803 systemd[1]: Started cri-containerd-e09ec79a508b07aca385525cad7db7f593ae3695d6711353ba6045737d2252d8.scope - libcontainer container e09ec79a508b07aca385525cad7db7f593ae3695d6711353ba6045737d2252d8. Jan 17 12:09:57.892228 containerd[1471]: time="2025-01-17T12:09:57.892167085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-44ndx,Uid:7ee3cc7a-0160-498e-b0bf-433c4187c564,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e09ec79a508b07aca385525cad7db7f593ae3695d6711353ba6045737d2252d8\"" Jan 17 12:09:57.894223 containerd[1471]: time="2025-01-17T12:09:57.894109816Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:09:58.354366 kubelet[2607]: E0117 12:09:58.354324 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:58.364009 kubelet[2607]: I0117 12:09:58.363844 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q8pd5" podStartSLOduration=1.363824563 podStartE2EDuration="1.363824563s" podCreationTimestamp="2025-01-17 12:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:09:58.363669088 +0000 UTC m=+16.151164842" watchObservedRunningTime="2025-01-17 12:09:58.363824563 +0000 UTC m=+16.151320317" Jan 17 12:09:59.635809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825980889.mount: Deactivated successfully. 
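The pod_startup_latency_tracker entry for kube-proxy-q8pd5 above is internally consistent: the reported podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes any image-pull window, which is zero here because the pull timestamps are the Go zero value (the later tigera-operator entry shows the non-zero case). A quick arithmetic check of those fields, as a sketch of the apparent relationship rather than the tracker's exact implementation:

```go
// Arithmetic check of the kube-proxy startup-latency entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2025-01-17T12:09:57Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-17T12:09:58.363824563Z")

	e2e := running.Sub(created) // podStartE2EDuration
	pull := time.Duration(0)    // no image pull for kube-proxy (zero-value pull timestamps)
	slo := e2e - pull           // podStartSLOduration excludes time spent pulling images

	fmt.Println(e2e, slo) // 1.363824563s 1.363824563s
}
```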
Jan 17 12:10:00.065710 containerd[1471]: time="2025-01-17T12:10:00.065635907Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:00.067405 containerd[1471]: time="2025-01-17T12:10:00.067354072Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764273" Jan 17 12:10:00.069401 containerd[1471]: time="2025-01-17T12:10:00.069359694Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:00.071431 containerd[1471]: time="2025-01-17T12:10:00.071383041Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:00.072255 containerd[1471]: time="2025-01-17T12:10:00.072222550Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.178069732s" Jan 17 12:10:00.072290 containerd[1471]: time="2025-01-17T12:10:00.072255958Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:10:00.076742 containerd[1471]: time="2025-01-17T12:10:00.076707934Z" level=info msg="CreateContainer within sandbox \"e09ec79a508b07aca385525cad7db7f593ae3695d6711353ba6045737d2252d8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:10:00.089168 containerd[1471]: time="2025-01-17T12:10:00.089121517Z" level=info msg="CreateContainer within sandbox \"e09ec79a508b07aca385525cad7db7f593ae3695d6711353ba6045737d2252d8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"705afb37e9a4b94c3f20b4c38bd48ebc329d0bd27f426a43121cf2f1aa726933\"" Jan 17 12:10:00.089728 containerd[1471]: time="2025-01-17T12:10:00.089539772Z" level=info msg="StartContainer for \"705afb37e9a4b94c3f20b4c38bd48ebc329d0bd27f426a43121cf2f1aa726933\"" Jan 17 12:10:00.118831 systemd[1]: Started cri-containerd-705afb37e9a4b94c3f20b4c38bd48ebc329d0bd27f426a43121cf2f1aa726933.scope - libcontainer container 705afb37e9a4b94c3f20b4c38bd48ebc329d0bd27f426a43121cf2f1aa726933. 
Jan 17 12:10:00.152565 containerd[1471]: time="2025-01-17T12:10:00.152492924Z" level=info msg="StartContainer for \"705afb37e9a4b94c3f20b4c38bd48ebc329d0bd27f426a43121cf2f1aa726933\" returns successfully" Jan 17 12:10:00.372300 kubelet[2607]: I0117 12:10:00.372110 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-44ndx" podStartSLOduration=1.1901770489999999 podStartE2EDuration="3.372083264s" podCreationTimestamp="2025-01-17 12:09:57 +0000 UTC" firstStartedPulling="2025-01-17 12:09:57.893612357 +0000 UTC m=+15.681108111" lastFinishedPulling="2025-01-17 12:10:00.075518572 +0000 UTC m=+17.863014326" observedRunningTime="2025-01-17 12:10:00.37202347 +0000 UTC m=+18.159519234" watchObservedRunningTime="2025-01-17 12:10:00.372083264 +0000 UTC m=+18.159579038" Jan 17 12:10:03.412014 kubelet[2607]: I0117 12:10:03.411955 2607 topology_manager.go:215] "Topology Admit Handler" podUID="ed0cbc06-8ecd-4f76-8e30-070c9fdeb363" podNamespace="calico-system" podName="calico-typha-6594cbbb76-4bpbl" Jan 17 12:10:03.429956 systemd[1]: Created slice kubepods-besteffort-poded0cbc06_8ecd_4f76_8e30_070c9fdeb363.slice - libcontainer container kubepods-besteffort-poded0cbc06_8ecd_4f76_8e30_070c9fdeb363.slice. Jan 17 12:10:03.447755 kubelet[2607]: I0117 12:10:03.447691 2607 topology_manager.go:215] "Topology Admit Handler" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" podNamespace="calico-system" podName="calico-node-hm29z" Jan 17 12:10:03.457887 systemd[1]: Created slice kubepods-besteffort-pod20b3aef6_8302_4600_bbe2_09c056e53e6a.slice - libcontainer container kubepods-besteffort-pod20b3aef6_8302_4600_bbe2_09c056e53e6a.slice. Jan 17 12:10:03.521186 kubelet[2607]: I0117 12:10:03.521126 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed0cbc06-8ecd-4f76-8e30-070c9fdeb363-tigera-ca-bundle\") pod \"calico-typha-6594cbbb76-4bpbl\" (UID: \"ed0cbc06-8ecd-4f76-8e30-070c9fdeb363\") " pod="calico-system/calico-typha-6594cbbb76-4bpbl" Jan 17 12:10:03.521186 kubelet[2607]: I0117 12:10:03.521177 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ed0cbc06-8ecd-4f76-8e30-070c9fdeb363-typha-certs\") pod \"calico-typha-6594cbbb76-4bpbl\" (UID: \"ed0cbc06-8ecd-4f76-8e30-070c9fdeb363\") " pod="calico-system/calico-typha-6594cbbb76-4bpbl" Jan 17 12:10:03.521394 kubelet[2607]: I0117 12:10:03.521203 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvm29\" (UniqueName: \"kubernetes.io/projected/ed0cbc06-8ecd-4f76-8e30-070c9fdeb363-kube-api-access-tvm29\") pod \"calico-typha-6594cbbb76-4bpbl\" (UID: \"ed0cbc06-8ecd-4f76-8e30-070c9fdeb363\") " pod="calico-system/calico-typha-6594cbbb76-4bpbl" Jan 17 12:10:03.609661 kubelet[2607]: I0117 12:10:03.609323 2607 topology_manager.go:215] "Topology Admit Handler" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" podNamespace="calico-system" podName="csi-node-driver-xfsj8" Jan 17 12:10:03.610312 kubelet[2607]: E0117 12:10:03.609940 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 
12:10:03.621889 kubelet[2607]: I0117 12:10:03.621420 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-log-dir\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.621889 kubelet[2607]: I0117 12:10:03.621543 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr48w\" (UniqueName: \"kubernetes.io/projected/20b3aef6-8302-4600-bbe2-09c056e53e6a-kube-api-access-kr48w\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.621889 kubelet[2607]: I0117 12:10:03.621573 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-bin-dir\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.621889 kubelet[2607]: I0117 12:10:03.621606 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-lib-modules\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.621889 kubelet[2607]: I0117 12:10:03.621622 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-xtables-lock\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.622199 kubelet[2607]: I0117 12:10:03.621637 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20b3aef6-8302-4600-bbe2-09c056e53e6a-tigera-ca-bundle\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.622199 kubelet[2607]: I0117 12:10:03.621653 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-var-lib-calico\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.622199 kubelet[2607]: I0117 12:10:03.621678 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-policysync\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.622199 kubelet[2607]: I0117 12:10:03.621692 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/20b3aef6-8302-4600-bbe2-09c056e53e6a-node-certs\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.622199 kubelet[2607]: I0117 12:10:03.621722 2607 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-flexvol-driver-host\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.622463 kubelet[2607]: I0117 12:10:03.621739 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-var-run-calico\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.622463 kubelet[2607]: I0117 12:10:03.621755 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-net-dir\") pod \"calico-node-hm29z\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " pod="calico-system/calico-node-hm29z" Jan 17 12:10:03.722240 kubelet[2607]: I0117 12:10:03.722195 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/83490bae-2f03-49cc-b16c-ff7f265ed80b-varrun\") pod \"csi-node-driver-xfsj8\" (UID: \"83490bae-2f03-49cc-b16c-ff7f265ed80b\") " pod="calico-system/csi-node-driver-xfsj8" Jan 17 12:10:03.722388 kubelet[2607]: I0117 12:10:03.722243 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83490bae-2f03-49cc-b16c-ff7f265ed80b-kubelet-dir\") pod \"csi-node-driver-xfsj8\" (UID: \"83490bae-2f03-49cc-b16c-ff7f265ed80b\") " pod="calico-system/csi-node-driver-xfsj8" Jan 17 12:10:03.722388 kubelet[2607]: I0117 12:10:03.722270 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srzxn\" (UniqueName: \"kubernetes.io/projected/83490bae-2f03-49cc-b16c-ff7f265ed80b-kube-api-access-srzxn\") pod \"csi-node-driver-xfsj8\" (UID: \"83490bae-2f03-49cc-b16c-ff7f265ed80b\") " pod="calico-system/csi-node-driver-xfsj8" Jan 17 12:10:03.722388 kubelet[2607]: I0117 12:10:03.722349 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/83490bae-2f03-49cc-b16c-ff7f265ed80b-socket-dir\") pod \"csi-node-driver-xfsj8\" (UID: \"83490bae-2f03-49cc-b16c-ff7f265ed80b\") " pod="calico-system/csi-node-driver-xfsj8" Jan 17 12:10:03.722495 kubelet[2607]: I0117 12:10:03.722474 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/83490bae-2f03-49cc-b16c-ff7f265ed80b-registration-dir\") pod \"csi-node-driver-xfsj8\" (UID: \"83490bae-2f03-49cc-b16c-ff7f265ed80b\") " pod="calico-system/csi-node-driver-xfsj8" Jan 17 12:10:03.726539 kubelet[2607]: E0117 12:10:03.725523 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.726539 kubelet[2607]: W0117 12:10:03.726509 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.726908 kubelet[2607]: E0117 12:10:03.726545 2607 
plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.727683 kubelet[2607]: E0117 12:10:03.727270 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.727683 kubelet[2607]: W0117 12:10:03.727296 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.727683 kubelet[2607]: E0117 12:10:03.727318 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.731377 kubelet[2607]: E0117 12:10:03.731320 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.731377 kubelet[2607]: W0117 12:10:03.731335 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.731377 kubelet[2607]: E0117 12:10:03.731349 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.735920 kubelet[2607]: E0117 12:10:03.735895 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:03.739872 containerd[1471]: time="2025-01-17T12:10:03.737293456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6594cbbb76-4bpbl,Uid:ed0cbc06-8ecd-4f76-8e30-070c9fdeb363,Namespace:calico-system,Attempt:0,}" Jan 17 12:10:03.760489 kubelet[2607]: E0117 12:10:03.760437 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:03.761073 containerd[1471]: time="2025-01-17T12:10:03.761032519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hm29z,Uid:20b3aef6-8302-4600-bbe2-09c056e53e6a,Namespace:calico-system,Attempt:0,}" Jan 17 12:10:03.807205 containerd[1471]: time="2025-01-17T12:10:03.807089913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:03.807205 containerd[1471]: time="2025-01-17T12:10:03.807164275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:03.807205 containerd[1471]: time="2025-01-17T12:10:03.807201881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.808434 containerd[1471]: time="2025-01-17T12:10:03.808346221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.809787 containerd[1471]: time="2025-01-17T12:10:03.809262666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:03.809787 containerd[1471]: time="2025-01-17T12:10:03.809340094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:03.809787 containerd[1471]: time="2025-01-17T12:10:03.809360996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.809787 containerd[1471]: time="2025-01-17T12:10:03.809582116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.823575 kubelet[2607]: E0117 12:10:03.823461 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.823575 kubelet[2607]: W0117 12:10:03.823487 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.823575 kubelet[2607]: E0117 12:10:03.823513 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.823817 kubelet[2607]: E0117 12:10:03.823728 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.823817 kubelet[2607]: W0117 12:10:03.823746 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.823817 kubelet[2607]: E0117 12:10:03.823760 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.825234 kubelet[2607]: E0117 12:10:03.824978 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.825234 kubelet[2607]: W0117 12:10:03.824994 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.825234 kubelet[2607]: E0117 12:10:03.825010 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.825234 kubelet[2607]: E0117 12:10:03.825229 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.825234 kubelet[2607]: W0117 12:10:03.825239 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.825412 kubelet[2607]: E0117 12:10:03.825250 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:03.825915 kubelet[2607]: E0117 12:10:03.825655 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.825915 kubelet[2607]: W0117 12:10:03.825678 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.825915 kubelet[2607]: E0117 12:10:03.825784 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.826027 kubelet[2607]: E0117 12:10:03.825889 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.826027 kubelet[2607]: W0117 12:10:03.825931 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.826118 kubelet[2607]: E0117 12:10:03.826042 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.827510 kubelet[2607]: E0117 12:10:03.827484 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.827510 kubelet[2607]: W0117 12:10:03.827504 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.830526 kubelet[2607]: E0117 12:10:03.827535 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.830526 kubelet[2607]: E0117 12:10:03.827880 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.830526 kubelet[2607]: W0117 12:10:03.827891 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.830526 kubelet[2607]: E0117 12:10:03.827985 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.830526 kubelet[2607]: E0117 12:10:03.828237 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.830526 kubelet[2607]: W0117 12:10:03.828245 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.830526 kubelet[2607]: E0117 12:10:03.828327 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:03.830526 kubelet[2607]: E0117 12:10:03.828488 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.830526 kubelet[2607]: W0117 12:10:03.828498 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.830526 kubelet[2607]: E0117 12:10:03.828599 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.830793 kubelet[2607]: E0117 12:10:03.828872 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.830793 kubelet[2607]: W0117 12:10:03.828879 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.830793 kubelet[2607]: E0117 12:10:03.828918 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.830793 kubelet[2607]: E0117 12:10:03.829115 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.830793 kubelet[2607]: W0117 12:10:03.829127 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.830793 kubelet[2607]: E0117 12:10:03.829187 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.830793 kubelet[2607]: E0117 12:10:03.829432 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.830793 kubelet[2607]: W0117 12:10:03.829460 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.830793 kubelet[2607]: E0117 12:10:03.829516 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.830793 kubelet[2607]: E0117 12:10:03.829761 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.831026 kubelet[2607]: W0117 12:10:03.829769 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.831026 kubelet[2607]: E0117 12:10:03.829878 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:03.831026 kubelet[2607]: E0117 12:10:03.829967 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.831026 kubelet[2607]: W0117 12:10:03.829976 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.831026 kubelet[2607]: E0117 12:10:03.829986 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.831026 kubelet[2607]: E0117 12:10:03.830246 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.831026 kubelet[2607]: W0117 12:10:03.830255 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.831026 kubelet[2607]: E0117 12:10:03.830268 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.830856 systemd[1]: Started cri-containerd-c5f9d2bfd7550fa4e09dad3a4d965a0ab55a74967a43c393668e2560230257c3.scope - libcontainer container c5f9d2bfd7550fa4e09dad3a4d965a0ab55a74967a43c393668e2560230257c3. Jan 17 12:10:03.831867 kubelet[2607]: E0117 12:10:03.830658 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.831867 kubelet[2607]: W0117 12:10:03.831410 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.831867 kubelet[2607]: E0117 12:10:03.831630 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.832343 kubelet[2607]: E0117 12:10:03.832308 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.832343 kubelet[2607]: W0117 12:10:03.832341 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.832611 kubelet[2607]: E0117 12:10:03.832573 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:03.833192 kubelet[2607]: E0117 12:10:03.833178 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.833192 kubelet[2607]: W0117 12:10:03.833192 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.834334 kubelet[2607]: E0117 12:10:03.834312 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.834426 kubelet[2607]: E0117 12:10:03.834414 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.834470 kubelet[2607]: W0117 12:10:03.834424 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.834543 kubelet[2607]: E0117 12:10:03.834524 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.834825 kubelet[2607]: E0117 12:10:03.834812 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.834825 kubelet[2607]: W0117 12:10:03.834824 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.835726 kubelet[2607]: E0117 12:10:03.835618 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.835846 kubelet[2607]: E0117 12:10:03.835830 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.835846 kubelet[2607]: W0117 12:10:03.835843 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.835901 kubelet[2607]: E0117 12:10:03.835856 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.836113 kubelet[2607]: E0117 12:10:03.836100 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.836160 kubelet[2607]: W0117 12:10:03.836111 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.836228 kubelet[2607]: E0117 12:10:03.836211 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:03.837172 kubelet[2607]: E0117 12:10:03.837073 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.837172 kubelet[2607]: W0117 12:10:03.837086 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.837172 kubelet[2607]: E0117 12:10:03.837165 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.837703 kubelet[2607]: E0117 12:10:03.837424 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.837703 kubelet[2607]: W0117 12:10:03.837461 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.837703 kubelet[2607]: E0117 12:10:03.837475 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:03.837917 systemd[1]: Started cri-containerd-7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04.scope - libcontainer container 7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04. Jan 17 12:10:03.846893 kubelet[2607]: E0117 12:10:03.846862 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:03.846893 kubelet[2607]: W0117 12:10:03.846881 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:03.847026 kubelet[2607]: E0117 12:10:03.846904 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:03.862579 containerd[1471]: time="2025-01-17T12:10:03.862533353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hm29z,Uid:20b3aef6-8302-4600-bbe2-09c056e53e6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\"" Jan 17 12:10:03.877536 containerd[1471]: time="2025-01-17T12:10:03.877496882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6594cbbb76-4bpbl,Uid:ed0cbc06-8ecd-4f76-8e30-070c9fdeb363,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5f9d2bfd7550fa4e09dad3a4d965a0ab55a74967a43c393668e2560230257c3\"" Jan 17 12:10:03.943551 kubelet[2607]: E0117 12:10:03.943501 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:03.943722 kubelet[2607]: E0117 12:10:03.943517 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:03.955223 containerd[1471]: time="2025-01-17T12:10:03.955175244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:10:05.314289 kubelet[2607]: E0117 12:10:05.314230 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:06.487795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2738453376.mount: Deactivated successfully. 
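The recurring driver-call.go and plugins.go errors above (and continuing below) come from the kubelet's periodic FlexVolume probe: it tries to run Calico's nodeagent~uds/uds driver with an init argument, the binary is not yet present on the host (the flexvol-driver-host volume listed for calico-node-hm29z is presumably populated once that pod starts), so the exec fails with empty output and unmarshalling that empty output then fails with "unexpected end of JSON input". A self-contained sketch of that probe shape, with the driver path taken from the log and the JSON shape assumed (an illustration, not the kubelet's driver-call.go):

```go
// Sketch of the FlexVolume "init" probe that produces the paired
// driver-call.go / plugins.go errors in the journal; illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeDriver(driver string) error {
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		// With the binary missing, the exec fails and out stays empty.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		// Empty output fails here with "unexpected end of JSON input".
		return fmt.Errorf("failed to unmarshal output for command: init: %w", jsonErr)
	}
	return nil
}

func main() {
	err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```

Once the driver binary is installed, subsequent probes should return a real status document and these log entries should stop.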
Jan 17 12:10:07.313921 kubelet[2607]: E0117 12:10:07.313868 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:08.193398 containerd[1471]: time="2025-01-17T12:10:08.193330105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:08.194150 containerd[1471]: time="2025-01-17T12:10:08.194114456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:10:08.195291 containerd[1471]: time="2025-01-17T12:10:08.195251741Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:08.197205 containerd[1471]: time="2025-01-17T12:10:08.197164770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:08.197987 containerd[1471]: time="2025-01-17T12:10:08.197954313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 4.242729678s" Jan 17 12:10:08.198052 containerd[1471]: time="2025-01-17T12:10:08.197990495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:10:08.199071 containerd[1471]: time="2025-01-17T12:10:08.199027200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:10:08.220568 containerd[1471]: time="2025-01-17T12:10:08.220504139Z" level=info msg="CreateContainer within sandbox \"c5f9d2bfd7550fa4e09dad3a4d965a0ab55a74967a43c393668e2560230257c3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:10:08.236096 containerd[1471]: time="2025-01-17T12:10:08.236025998Z" level=info msg="CreateContainer within sandbox \"c5f9d2bfd7550fa4e09dad3a4d965a0ab55a74967a43c393668e2560230257c3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"336e2ef33f8bec43a45d6d78155e6b68c337e67b5fc00b99dd1c5b6f2127fc4a\"" Jan 17 12:10:08.236641 containerd[1471]: time="2025-01-17T12:10:08.236579952Z" level=info msg="StartContainer for \"336e2ef33f8bec43a45d6d78155e6b68c337e67b5fc00b99dd1c5b6f2127fc4a\"" Jan 17 12:10:08.268732 systemd[1]: Started cri-containerd-336e2ef33f8bec43a45d6d78155e6b68c337e67b5fc00b99dd1c5b6f2127fc4a.scope - libcontainer container 336e2ef33f8bec43a45d6d78155e6b68c337e67b5fc00b99dd1c5b6f2127fc4a. 
Jan 17 12:10:08.309648 containerd[1471]: time="2025-01-17T12:10:08.309198956Z" level=info msg="StartContainer for \"336e2ef33f8bec43a45d6d78155e6b68c337e67b5fc00b99dd1c5b6f2127fc4a\" returns successfully" Jan 17 12:10:08.392630 kubelet[2607]: E0117 12:10:08.392562 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:08.453417 kubelet[2607]: E0117 12:10:08.453364 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.453417 kubelet[2607]: W0117 12:10:08.453396 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.453417 kubelet[2607]: E0117 12:10:08.453421 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.453695 kubelet[2607]: E0117 12:10:08.453679 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.453695 kubelet[2607]: W0117 12:10:08.453693 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.453754 kubelet[2607]: E0117 12:10:08.453704 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.453952 kubelet[2607]: E0117 12:10:08.453927 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.453952 kubelet[2607]: W0117 12:10:08.453943 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.454015 kubelet[2607]: E0117 12:10:08.453954 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.454185 kubelet[2607]: E0117 12:10:08.454163 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.454185 kubelet[2607]: W0117 12:10:08.454177 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.454250 kubelet[2607]: E0117 12:10:08.454186 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:08.454395 kubelet[2607]: E0117 12:10:08.454370 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.454395 kubelet[2607]: W0117 12:10:08.454385 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.454458 kubelet[2607]: E0117 12:10:08.454395 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.454573 kubelet[2607]: E0117 12:10:08.454558 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.454573 kubelet[2607]: W0117 12:10:08.454571 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.454641 kubelet[2607]: E0117 12:10:08.454580 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.454782 kubelet[2607]: E0117 12:10:08.454760 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.454782 kubelet[2607]: W0117 12:10:08.454774 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.454828 kubelet[2607]: E0117 12:10:08.454784 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.455006 kubelet[2607]: E0117 12:10:08.454989 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.455006 kubelet[2607]: W0117 12:10:08.455003 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.455066 kubelet[2607]: E0117 12:10:08.455014 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.455214 kubelet[2607]: E0117 12:10:08.455199 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.455214 kubelet[2607]: W0117 12:10:08.455211 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.455272 kubelet[2607]: E0117 12:10:08.455220 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:08.455413 kubelet[2607]: E0117 12:10:08.455398 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.455413 kubelet[2607]: W0117 12:10:08.455410 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.455465 kubelet[2607]: E0117 12:10:08.455419 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.455617 kubelet[2607]: E0117 12:10:08.455602 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.455617 kubelet[2607]: W0117 12:10:08.455615 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.455670 kubelet[2607]: E0117 12:10:08.455624 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.455811 kubelet[2607]: E0117 12:10:08.455797 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.455834 kubelet[2607]: W0117 12:10:08.455811 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.455834 kubelet[2607]: E0117 12:10:08.455821 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.456046 kubelet[2607]: E0117 12:10:08.456027 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.456046 kubelet[2607]: W0117 12:10:08.456042 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.456189 kubelet[2607]: E0117 12:10:08.456053 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.456291 kubelet[2607]: E0117 12:10:08.456278 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.456291 kubelet[2607]: W0117 12:10:08.456289 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.456360 kubelet[2607]: E0117 12:10:08.456299 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:08.456524 kubelet[2607]: E0117 12:10:08.456497 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.456524 kubelet[2607]: W0117 12:10:08.456508 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.456524 kubelet[2607]: E0117 12:10:08.456518 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.464854 kubelet[2607]: E0117 12:10:08.464815 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.464854 kubelet[2607]: W0117 12:10:08.464841 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.464941 kubelet[2607]: E0117 12:10:08.464868 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.465158 kubelet[2607]: E0117 12:10:08.465121 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.465158 kubelet[2607]: W0117 12:10:08.465142 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.465158 kubelet[2607]: E0117 12:10:08.465160 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.465474 kubelet[2607]: E0117 12:10:08.465457 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.465474 kubelet[2607]: W0117 12:10:08.465473 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.465529 kubelet[2607]: E0117 12:10:08.465489 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.465774 kubelet[2607]: E0117 12:10:08.465758 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.465774 kubelet[2607]: W0117 12:10:08.465772 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.465835 kubelet[2607]: E0117 12:10:08.465789 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:08.466069 kubelet[2607]: E0117 12:10:08.466042 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.466069 kubelet[2607]: W0117 12:10:08.466061 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.466215 kubelet[2607]: E0117 12:10:08.466083 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.466303 kubelet[2607]: E0117 12:10:08.466285 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.466303 kubelet[2607]: W0117 12:10:08.466298 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.466378 kubelet[2607]: E0117 12:10:08.466328 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.466562 kubelet[2607]: E0117 12:10:08.466540 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.466562 kubelet[2607]: W0117 12:10:08.466557 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.467274 kubelet[2607]: E0117 12:10:08.466601 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.467274 kubelet[2607]: E0117 12:10:08.466841 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.467274 kubelet[2607]: W0117 12:10:08.466851 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.467274 kubelet[2607]: E0117 12:10:08.466869 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.467274 kubelet[2607]: E0117 12:10:08.467102 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.467274 kubelet[2607]: W0117 12:10:08.467113 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.467274 kubelet[2607]: E0117 12:10:08.467132 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:08.467457 kubelet[2607]: E0117 12:10:08.467352 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.467457 kubelet[2607]: W0117 12:10:08.467362 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.467457 kubelet[2607]: E0117 12:10:08.467377 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.467644 kubelet[2607]: E0117 12:10:08.467628 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.467644 kubelet[2607]: W0117 12:10:08.467640 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.467697 kubelet[2607]: E0117 12:10:08.467655 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.467839 kubelet[2607]: E0117 12:10:08.467821 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.467839 kubelet[2607]: W0117 12:10:08.467832 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.467839 kubelet[2607]: E0117 12:10:08.467842 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.468095 kubelet[2607]: E0117 12:10:08.468078 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.468095 kubelet[2607]: W0117 12:10:08.468092 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.468155 kubelet[2607]: E0117 12:10:08.468109 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.468401 kubelet[2607]: E0117 12:10:08.468381 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.468401 kubelet[2607]: W0117 12:10:08.468392 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.468401 kubelet[2607]: E0117 12:10:08.468404 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:08.468767 kubelet[2607]: E0117 12:10:08.468750 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.468794 kubelet[2607]: W0117 12:10:08.468766 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.468794 kubelet[2607]: E0117 12:10:08.468782 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.468974 kubelet[2607]: E0117 12:10:08.468959 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.468974 kubelet[2607]: W0117 12:10:08.468971 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.469039 kubelet[2607]: E0117 12:10:08.468983 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.469200 kubelet[2607]: E0117 12:10:08.469184 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.469200 kubelet[2607]: W0117 12:10:08.469197 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.469261 kubelet[2607]: E0117 12:10:08.469213 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:08.469464 kubelet[2607]: E0117 12:10:08.469446 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:08.469464 kubelet[2607]: W0117 12:10:08.469459 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:08.469527 kubelet[2607]: E0117 12:10:08.469469 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:09.314036 kubelet[2607]: E0117 12:10:09.313984 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:09.381626 kubelet[2607]: I0117 12:10:09.381577 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:10:09.382370 kubelet[2607]: E0117 12:10:09.382290 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:09.464641 kubelet[2607]: E0117 12:10:09.464606 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.464641 kubelet[2607]: W0117 12:10:09.464632 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.465196 kubelet[2607]: E0117 12:10:09.464660 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.465196 kubelet[2607]: E0117 12:10:09.464858 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.465196 kubelet[2607]: W0117 12:10:09.464865 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.465196 kubelet[2607]: E0117 12:10:09.464873 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.465196 kubelet[2607]: E0117 12:10:09.465155 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.465196 kubelet[2607]: W0117 12:10:09.465187 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.465372 kubelet[2607]: E0117 12:10:09.465218 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.465563 kubelet[2607]: E0117 12:10:09.465546 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.465563 kubelet[2607]: W0117 12:10:09.465558 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.465652 kubelet[2607]: E0117 12:10:09.465568 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:09.465794 kubelet[2607]: E0117 12:10:09.465779 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.465794 kubelet[2607]: W0117 12:10:09.465790 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.465851 kubelet[2607]: E0117 12:10:09.465798 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.466023 kubelet[2607]: E0117 12:10:09.466007 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.466023 kubelet[2607]: W0117 12:10:09.466020 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.466082 kubelet[2607]: E0117 12:10:09.466031 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.466246 kubelet[2607]: E0117 12:10:09.466232 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.466246 kubelet[2607]: W0117 12:10:09.466243 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.466307 kubelet[2607]: E0117 12:10:09.466251 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.466466 kubelet[2607]: E0117 12:10:09.466451 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.466466 kubelet[2607]: W0117 12:10:09.466461 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.466607 kubelet[2607]: E0117 12:10:09.466469 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.466700 kubelet[2607]: E0117 12:10:09.466685 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.466700 kubelet[2607]: W0117 12:10:09.466695 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.466765 kubelet[2607]: E0117 12:10:09.466703 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:09.466897 kubelet[2607]: E0117 12:10:09.466882 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.466897 kubelet[2607]: W0117 12:10:09.466893 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.466975 kubelet[2607]: E0117 12:10:09.466901 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.467090 kubelet[2607]: E0117 12:10:09.467075 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.467090 kubelet[2607]: W0117 12:10:09.467086 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.467143 kubelet[2607]: E0117 12:10:09.467093 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.467304 kubelet[2607]: E0117 12:10:09.467287 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.467304 kubelet[2607]: W0117 12:10:09.467298 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.467374 kubelet[2607]: E0117 12:10:09.467307 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.467523 kubelet[2607]: E0117 12:10:09.467508 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.467523 kubelet[2607]: W0117 12:10:09.467520 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.467599 kubelet[2607]: E0117 12:10:09.467528 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.467749 kubelet[2607]: E0117 12:10:09.467733 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.467749 kubelet[2607]: W0117 12:10:09.467746 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.467809 kubelet[2607]: E0117 12:10:09.467756 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:09.467978 kubelet[2607]: E0117 12:10:09.467962 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.467978 kubelet[2607]: W0117 12:10:09.467974 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.468035 kubelet[2607]: E0117 12:10:09.467983 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.472237 kubelet[2607]: E0117 12:10:09.472221 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.472237 kubelet[2607]: W0117 12:10:09.472234 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.472343 kubelet[2607]: E0117 12:10:09.472245 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.482937 kubelet[2607]: E0117 12:10:09.482914 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.482937 kubelet[2607]: W0117 12:10:09.482928 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.483021 kubelet[2607]: E0117 12:10:09.482944 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.483174 kubelet[2607]: E0117 12:10:09.483161 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.483174 kubelet[2607]: W0117 12:10:09.483173 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.483252 kubelet[2607]: E0117 12:10:09.483186 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.483484 kubelet[2607]: E0117 12:10:09.483470 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.483484 kubelet[2607]: W0117 12:10:09.483482 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.483562 kubelet[2607]: E0117 12:10:09.483495 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:09.483751 kubelet[2607]: E0117 12:10:09.483729 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.483751 kubelet[2607]: W0117 12:10:09.483742 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.483830 kubelet[2607]: E0117 12:10:09.483756 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.484029 kubelet[2607]: E0117 12:10:09.484008 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.484029 kubelet[2607]: W0117 12:10:09.484020 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.484093 kubelet[2607]: E0117 12:10:09.484060 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.484241 kubelet[2607]: E0117 12:10:09.484228 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.484241 kubelet[2607]: W0117 12:10:09.484238 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.484319 kubelet[2607]: E0117 12:10:09.484264 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.484463 kubelet[2607]: E0117 12:10:09.484451 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.484463 kubelet[2607]: W0117 12:10:09.484462 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.484538 kubelet[2607]: E0117 12:10:09.484487 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.484700 kubelet[2607]: E0117 12:10:09.484688 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.484700 kubelet[2607]: W0117 12:10:09.484700 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.484753 kubelet[2607]: E0117 12:10:09.484714 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:09.484942 kubelet[2607]: E0117 12:10:09.484925 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.484942 kubelet[2607]: W0117 12:10:09.484938 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.484998 kubelet[2607]: E0117 12:10:09.484953 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.485172 kubelet[2607]: E0117 12:10:09.485155 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.485172 kubelet[2607]: W0117 12:10:09.485166 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.485241 kubelet[2607]: E0117 12:10:09.485178 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.485427 kubelet[2607]: E0117 12:10:09.485411 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.485427 kubelet[2607]: W0117 12:10:09.485426 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.485500 kubelet[2607]: E0117 12:10:09.485443 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.485678 kubelet[2607]: E0117 12:10:09.485658 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.485678 kubelet[2607]: W0117 12:10:09.485671 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.485765 kubelet[2607]: E0117 12:10:09.485681 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.485865 kubelet[2607]: E0117 12:10:09.485850 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.485865 kubelet[2607]: W0117 12:10:09.485860 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.485865 kubelet[2607]: E0117 12:10:09.485868 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:09.486081 kubelet[2607]: E0117 12:10:09.486065 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.486081 kubelet[2607]: W0117 12:10:09.486079 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.486161 kubelet[2607]: E0117 12:10:09.486092 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.486660 kubelet[2607]: E0117 12:10:09.486634 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.486660 kubelet[2607]: W0117 12:10:09.486650 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.486660 kubelet[2607]: E0117 12:10:09.486661 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.489356 kubelet[2607]: E0117 12:10:09.489333 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.489356 kubelet[2607]: W0117 12:10:09.489349 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.489482 kubelet[2607]: E0117 12:10:09.489366 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:10:09.489612 kubelet[2607]: E0117 12:10:09.489574 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:10:09.489612 kubelet[2607]: W0117 12:10:09.489602 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:10:09.489699 kubelet[2607]: E0117 12:10:09.489616 2607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:10:10.555278 containerd[1471]: time="2025-01-17T12:10:10.555222001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:10.556536 containerd[1471]: time="2025-01-17T12:10:10.556464631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:10:10.559336 containerd[1471]: time="2025-01-17T12:10:10.558638928Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:10.561632 containerd[1471]: time="2025-01-17T12:10:10.560846451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:10.564639 containerd[1471]: time="2025-01-17T12:10:10.561864471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.362786561s" Jan 17 12:10:10.564639 containerd[1471]: time="2025-01-17T12:10:10.561921094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:10:10.569559 containerd[1471]: time="2025-01-17T12:10:10.569510623Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:10:10.844307 containerd[1471]: time="2025-01-17T12:10:10.844169568Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c\"" Jan 17 12:10:10.844863 containerd[1471]: time="2025-01-17T12:10:10.844828689Z" level=info msg="StartContainer for \"c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c\"" Jan 17 12:10:10.876731 systemd[1]: Started cri-containerd-c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c.scope - libcontainer container c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c. Jan 17 12:10:10.922207 systemd[1]: cri-containerd-c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c.scope: Deactivated successfully. Jan 17 12:10:11.164018 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:57140.service - OpenSSH per-connection server daemon (10.0.0.1:57140). 
Jan 17 12:10:11.314814 kubelet[2607]: E0117 12:10:11.314759 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:11.334145 containerd[1471]: time="2025-01-17T12:10:11.334081787Z" level=info msg="StartContainer for \"c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c\" returns successfully" Jan 17 12:10:11.357631 sshd[3289]: Accepted publickey for core from 10.0.0.1 port 57140 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:11.359621 sshd[3289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:11.364837 systemd-logind[1458]: New session 8 of user core. Jan 17 12:10:11.377901 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:10:11.405668 kubelet[2607]: E0117 12:10:11.386333 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:11.409458 kubelet[2607]: I0117 12:10:11.409398 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6594cbbb76-4bpbl" podStartSLOduration=4.156030076 podStartE2EDuration="8.409373877s" podCreationTimestamp="2025-01-17 12:10:03 +0000 UTC" firstStartedPulling="2025-01-17 12:10:03.945367169 +0000 UTC m=+21.732862923" lastFinishedPulling="2025-01-17 12:10:08.19871096 +0000 UTC m=+25.986206724" observedRunningTime="2025-01-17 12:10:08.420452386 +0000 UTC m=+26.207948141" watchObservedRunningTime="2025-01-17 12:10:11.409373877 +0000 UTC m=+29.196869641" Jan 17 12:10:11.590790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c-rootfs.mount: Deactivated successfully. Jan 17 12:10:11.875124 sshd[3289]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:11.879344 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:57140.service: Deactivated successfully. Jan 17 12:10:11.881821 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:10:11.882632 containerd[1471]: time="2025-01-17T12:10:11.882501844Z" level=info msg="shim disconnected" id=c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c namespace=k8s.io Jan 17 12:10:11.882632 containerd[1471]: time="2025-01-17T12:10:11.882609288Z" level=warning msg="cleaning up after shim disconnected" id=c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c namespace=k8s.io Jan 17 12:10:11.882632 containerd[1471]: time="2025-01-17T12:10:11.882632595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:11.882995 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:10:11.883958 systemd-logind[1458]: Removed session 8. 
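The pod_startup_latency_tracker entry just above for calico-typha-6594cbbb76-4bpbl is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Recomputing from the timestamps printed in the message itself, as a sanity check rather than kubelet code:

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2025-01-17 12:10:03.000000", fmt)  # podCreationTimestamp
pull_from = datetime.strptime("2025-01-17 12:10:03.945367", fmt)  # firstStartedPulling
pull_to   = datetime.strptime("2025-01-17 12:10:08.198710", fmt)  # lastFinishedPulling
running   = datetime.strptime("2025-01-17 12:10:11.409373", fmt)  # observedRunningTime

e2e = (running - created).total_seconds()           # ~8.409s -> podStartE2EDuration
slo = e2e - (pull_to - pull_from).total_seconds()   # ~4.156s -> podStartSLOduration
print(round(e2e, 3), round(slo, 3))                 # 8.409 4.156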
Jan 17 12:10:12.391252 kubelet[2607]: E0117 12:10:12.390798 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:12.392111 containerd[1471]: time="2025-01-17T12:10:12.391637943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:10:13.314224 kubelet[2607]: E0117 12:10:13.314157 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:15.315038 kubelet[2607]: E0117 12:10:15.314964 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:16.888577 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:57156.service - OpenSSH per-connection server daemon (10.0.0.1:57156). Jan 17 12:10:16.931612 sshd[3335]: Accepted publickey for core from 10.0.0.1 port 57156 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:16.933349 sshd[3335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:16.938256 systemd-logind[1458]: New session 9 of user core. Jan 17 12:10:16.944759 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:10:17.066632 sshd[3335]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:17.071452 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:57156.service: Deactivated successfully. Jan 17 12:10:17.073942 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:10:17.074805 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:10:17.075869 systemd-logind[1458]: Removed session 9. 
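The dns.go:153 "Nameserver limits exceeded" warning that keeps repeating through this stretch comes from the kubelet clamping a pod's resolv.conf to the classic resolver limit of three nameservers; the host evidently lists more than three, and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive into the applied line. A minimal sketch of that clamping, with a hypothetical resolv.conf since the full host configuration is not shown in this log:

# Sketch: mimic the nameserver clamping behind the "Nameserver limits exceeded"
# warning. The resolv.conf content below is hypothetical; only the three
# surviving servers (1.1.1.1 1.0.0.1 8.8.8.8) appear in the log.
MAX_NAMESERVERS = 3  # classic resolver limit the kubelet enforces

def clamp_nameservers(resolv_conf_text):
    servers = [fields[1]
               for fields in (line.split() for line in resolv_conf_text.splitlines())
               if len(fields) > 1 and fields[0] == "nameserver"]
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits were exceeded, some nameservers have been omitted, "
              "the applied nameserver line is: " + " ".join(servers[:MAX_NAMESERVERS]))
    return servers[:MAX_NAMESERVERS]

hypothetical_resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""
print(clamp_nameservers(hypothetical_resolv_conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']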
Jan 17 12:10:17.314475 kubelet[2607]: E0117 12:10:17.314403 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:19.314299 kubelet[2607]: E0117 12:10:19.314216 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:19.521991 containerd[1471]: time="2025-01-17T12:10:19.521896005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:19.522754 containerd[1471]: time="2025-01-17T12:10:19.522686757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:10:19.524270 containerd[1471]: time="2025-01-17T12:10:19.524228523Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:19.526493 containerd[1471]: time="2025-01-17T12:10:19.526434310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:19.527145 containerd[1471]: time="2025-01-17T12:10:19.527105115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 7.135429028s" Jan 17 12:10:19.527145 containerd[1471]: time="2025-01-17T12:10:19.527136186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:10:19.529445 containerd[1471]: time="2025-01-17T12:10:19.529424407Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:10:19.546428 containerd[1471]: time="2025-01-17T12:10:19.546377285Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15\"" Jan 17 12:10:19.547041 containerd[1471]: time="2025-01-17T12:10:19.547004635Z" level=info msg="StartContainer for \"2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15\"" Jan 17 12:10:19.588865 systemd[1]: Started cri-containerd-2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15.scope - libcontainer container 2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15. 
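The repeated "cni plugin not initialized" / NetworkReady=false errors for csi-node-driver-xfsj8 persist until the install-cni container started just above (from the freshly pulled ghcr.io/flatcar/calico/cni:v3.29.1 image) writes a network configuration into the CNI config directory; the container runtime only reports the network as ready once a loadable config exists there. A small sketch of that readiness check, assuming the conventional /etc/cni/net.d location, which this log does not actually show:

# Sketch of the "is a CNI network configured yet?" check, assuming the
# conventional /etc/cni/net.d directory (not shown in this log). Until the
# install-cni container drops a config here, sandbox setup keeps failing with
# "cni plugin not initialized".
import glob
import os

def cni_configs(confdir="/etc/cni/net.d"):
    found = []
    for pattern in ("*.conflist", "*.conf", "*.json"):
        found.extend(glob.glob(os.path.join(confdir, pattern)))
    return sorted(found)

configs = cni_configs()
print("NetworkReady=%s configs=%s" % (bool(configs), configs))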
Jan 17 12:10:19.621696 containerd[1471]: time="2025-01-17T12:10:19.621584070Z" level=info msg="StartContainer for \"2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15\" returns successfully" Jan 17 12:10:20.407124 kubelet[2607]: E0117 12:10:20.407085 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:21.314337 kubelet[2607]: E0117 12:10:21.314261 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:21.409627 kubelet[2607]: E0117 12:10:21.409556 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:21.628701 systemd[1]: cri-containerd-2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15.scope: Deactivated successfully. Jan 17 12:10:21.633552 kubelet[2607]: I0117 12:10:21.633499 2607 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:10:21.654713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15-rootfs.mount: Deactivated successfully. Jan 17 12:10:21.675743 containerd[1471]: time="2025-01-17T12:10:21.675637657Z" level=info msg="shim disconnected" id=2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15 namespace=k8s.io Jan 17 12:10:21.675743 containerd[1471]: time="2025-01-17T12:10:21.675729548Z" level=warning msg="cleaning up after shim disconnected" id=2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15 namespace=k8s.io Jan 17 12:10:21.675743 containerd[1471]: time="2025-01-17T12:10:21.675742173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:21.679670 kubelet[2607]: I0117 12:10:21.679617 2607 topology_manager.go:215] "Topology Admit Handler" podUID="7d963b6e-e967-461b-88c1-043d231c7107" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hmksg" Jan 17 12:10:21.687531 systemd[1]: Created slice kubepods-burstable-pod7d963b6e_e967_461b_88c1_043d231c7107.slice - libcontainer container kubepods-burstable-pod7d963b6e_e967_461b_88c1_043d231c7107.slice. 
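The "Created slice kubepods-..." entry here, and the ones that follow the next batch of Topology Admit Handler messages, pair with the admitted pod's UID: with the systemd cgroup driver the kubelet derives the transient slice name from the pod's QoS class and UID, turning the dashes in the UID into underscores. A sketch of that mapping, inferred from the slice names in this log rather than from kubelet source:

# Sketch: map a pod UID to the systemd slice name the kubelet creates for it,
# inferred from the "Created slice ..." lines in this log.
def pod_slice_name(qos_class, pod_uid):
    # qos_class as it appears in these slice names: "burstable" or "besteffort"
    # (guaranteed pods are named without a QoS segment and do not appear here).
    return "kubepods-%s-pod%s.slice" % (qos_class, pod_uid.replace("-", "_"))

print(pod_slice_name("burstable", "7d963b6e-e967-461b-88c1-043d231c7107"))
# kubepods-burstable-pod7d963b6e_e967_461b_88c1_043d231c7107.slice
print(pod_slice_name("besteffort", "09d6eb3a-b020-453e-a3b2-1c2857fad614"))
# kubepods-besteffort-pod09d6eb3a_b020_453e_a3b2_1c2857fad614.slice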
Jan 17 12:10:21.716410 kubelet[2607]: I0117 12:10:21.716363 2607 topology_manager.go:215] "Topology Admit Handler" podUID="09d6eb3a-b020-453e-a3b2-1c2857fad614" podNamespace="calico-system" podName="calico-kube-controllers-6d754b8cc8-79mq6" Jan 17 12:10:21.718635 kubelet[2607]: I0117 12:10:21.718539 2607 topology_manager.go:215] "Topology Admit Handler" podUID="c6c7969a-d094-4962-9f3d-83a3ce21e375" podNamespace="calico-apiserver" podName="calico-apiserver-c8975f968-wpfqg" Jan 17 12:10:21.720017 kubelet[2607]: I0117 12:10:21.719623 2607 topology_manager.go:215] "Topology Admit Handler" podUID="50e92775-825e-4d1d-9a42-956f2281a0b9" podNamespace="calico-apiserver" podName="calico-apiserver-c8975f968-4wqpn" Jan 17 12:10:21.720017 kubelet[2607]: I0117 12:10:21.719903 2607 topology_manager.go:215] "Topology Admit Handler" podUID="cf1e6aaa-53c2-4de6-a445-b92ba78d0548" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4grjz" Jan 17 12:10:21.725248 systemd[1]: Created slice kubepods-besteffort-pod09d6eb3a_b020_453e_a3b2_1c2857fad614.slice - libcontainer container kubepods-besteffort-pod09d6eb3a_b020_453e_a3b2_1c2857fad614.slice. Jan 17 12:10:21.731172 systemd[1]: Created slice kubepods-besteffort-podc6c7969a_d094_4962_9f3d_83a3ce21e375.slice - libcontainer container kubepods-besteffort-podc6c7969a_d094_4962_9f3d_83a3ce21e375.slice. Jan 17 12:10:21.736468 systemd[1]: Created slice kubepods-besteffort-pod50e92775_825e_4d1d_9a42_956f2281a0b9.slice - libcontainer container kubepods-besteffort-pod50e92775_825e_4d1d_9a42_956f2281a0b9.slice. Jan 17 12:10:21.740944 systemd[1]: Created slice kubepods-burstable-podcf1e6aaa_53c2_4de6_a445_b92ba78d0548.slice - libcontainer container kubepods-burstable-podcf1e6aaa_53c2_4de6_a445_b92ba78d0548.slice. Jan 17 12:10:21.863806 kubelet[2607]: I0117 12:10:21.863740 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzs9b\" (UniqueName: \"kubernetes.io/projected/c6c7969a-d094-4962-9f3d-83a3ce21e375-kube-api-access-dzs9b\") pod \"calico-apiserver-c8975f968-wpfqg\" (UID: \"c6c7969a-d094-4962-9f3d-83a3ce21e375\") " pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" Jan 17 12:10:21.863806 kubelet[2607]: I0117 12:10:21.863797 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d963b6e-e967-461b-88c1-043d231c7107-config-volume\") pod \"coredns-7db6d8ff4d-hmksg\" (UID: \"7d963b6e-e967-461b-88c1-043d231c7107\") " pod="kube-system/coredns-7db6d8ff4d-hmksg" Jan 17 12:10:21.863988 kubelet[2607]: I0117 12:10:21.863827 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c6c7969a-d094-4962-9f3d-83a3ce21e375-calico-apiserver-certs\") pod \"calico-apiserver-c8975f968-wpfqg\" (UID: \"c6c7969a-d094-4962-9f3d-83a3ce21e375\") " pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" Jan 17 12:10:21.863988 kubelet[2607]: I0117 12:10:21.863854 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf1e6aaa-53c2-4de6-a445-b92ba78d0548-config-volume\") pod \"coredns-7db6d8ff4d-4grjz\" (UID: \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\") " pod="kube-system/coredns-7db6d8ff4d-4grjz" Jan 17 12:10:21.863988 kubelet[2607]: I0117 12:10:21.863939 2607 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/50e92775-825e-4d1d-9a42-956f2281a0b9-calico-apiserver-certs\") pod \"calico-apiserver-c8975f968-4wqpn\" (UID: \"50e92775-825e-4d1d-9a42-956f2281a0b9\") " pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" Jan 17 12:10:21.864082 kubelet[2607]: I0117 12:10:21.864012 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dfb\" (UniqueName: \"kubernetes.io/projected/cf1e6aaa-53c2-4de6-a445-b92ba78d0548-kube-api-access-q4dfb\") pod \"coredns-7db6d8ff4d-4grjz\" (UID: \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\") " pod="kube-system/coredns-7db6d8ff4d-4grjz" Jan 17 12:10:21.864082 kubelet[2607]: I0117 12:10:21.864047 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09d6eb3a-b020-453e-a3b2-1c2857fad614-tigera-ca-bundle\") pod \"calico-kube-controllers-6d754b8cc8-79mq6\" (UID: \"09d6eb3a-b020-453e-a3b2-1c2857fad614\") " pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" Jan 17 12:10:21.864135 kubelet[2607]: I0117 12:10:21.864081 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26b9d\" (UniqueName: \"kubernetes.io/projected/7d963b6e-e967-461b-88c1-043d231c7107-kube-api-access-26b9d\") pod \"coredns-7db6d8ff4d-hmksg\" (UID: \"7d963b6e-e967-461b-88c1-043d231c7107\") " pod="kube-system/coredns-7db6d8ff4d-hmksg" Jan 17 12:10:21.864135 kubelet[2607]: I0117 12:10:21.864106 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8ghb\" (UniqueName: \"kubernetes.io/projected/09d6eb3a-b020-453e-a3b2-1c2857fad614-kube-api-access-m8ghb\") pod \"calico-kube-controllers-6d754b8cc8-79mq6\" (UID: \"09d6eb3a-b020-453e-a3b2-1c2857fad614\") " pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" Jan 17 12:10:21.864219 kubelet[2607]: I0117 12:10:21.864198 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ffd\" (UniqueName: \"kubernetes.io/projected/50e92775-825e-4d1d-9a42-956f2281a0b9-kube-api-access-g2ffd\") pod \"calico-apiserver-c8975f968-4wqpn\" (UID: \"50e92775-825e-4d1d-9a42-956f2281a0b9\") " pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" Jan 17 12:10:21.990524 kubelet[2607]: E0117 12:10:21.990474 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:21.991093 containerd[1471]: time="2025-01-17T12:10:21.991046815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hmksg,Uid:7d963b6e-e967-461b-88c1-043d231c7107,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:22.029275 containerd[1471]: time="2025-01-17T12:10:22.029220951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d754b8cc8-79mq6,Uid:09d6eb3a-b020-453e-a3b2-1c2857fad614,Namespace:calico-system,Attempt:0,}" Jan 17 12:10:22.034912 containerd[1471]: time="2025-01-17T12:10:22.034862431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8975f968-wpfqg,Uid:c6c7969a-d094-4962-9f3d-83a3ce21e375,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:10:22.040157 containerd[1471]: time="2025-01-17T12:10:22.040113784Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8975f968-4wqpn,Uid:50e92775-825e-4d1d-9a42-956f2281a0b9,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:10:22.043712 kubelet[2607]: E0117 12:10:22.043681 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:22.044194 containerd[1471]: time="2025-01-17T12:10:22.044027885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4grjz,Uid:cf1e6aaa-53c2-4de6-a445-b92ba78d0548,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:22.085955 containerd[1471]: time="2025-01-17T12:10:22.085845026Z" level=error msg="Failed to destroy network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.088015 containerd[1471]: time="2025-01-17T12:10:22.087782488Z" level=error msg="encountered an error cleaning up failed sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.088015 containerd[1471]: time="2025-01-17T12:10:22.087841974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hmksg,Uid:7d963b6e-e967-461b-88c1-043d231c7107,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.088034 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:36036.service - OpenSSH per-connection server daemon (10.0.0.1:36036). 
Jan 17 12:10:22.088179 kubelet[2607]: E0117 12:10:22.088099 2607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.088179 kubelet[2607]: E0117 12:10:22.088174 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hmksg" Jan 17 12:10:22.088254 kubelet[2607]: E0117 12:10:22.088197 2607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hmksg" Jan 17 12:10:22.088282 kubelet[2607]: E0117 12:10:22.088242 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hmksg_kube-system(7d963b6e-e967-461b-88c1-043d231c7107)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hmksg_kube-system(7d963b6e-e967-461b-88c1-043d231c7107)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hmksg" podUID="7d963b6e-e967-461b-88c1-043d231c7107" Jan 17 12:10:22.129969 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 36036 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:22.133612 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:22.140857 systemd-logind[1458]: New session 10 of user core. Jan 17 12:10:22.145879 systemd[1]: Started session-10.scope - Session 10 of User core. 
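
Every RunPodSandbox and StopPodSandbox failure in this stretch has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename before doing any add or delete work, and that file only exists once the calico/node container is running and has mounted /var/lib/calico/ from the host. Until calico-node is up (its image is still being pulled at this point, see the PullImage line further down), every pod that needs a network sandbox fails with the identical error. Below is a minimal, hypothetical Go sketch of that readiness gate, written to mirror the error text in the log rather than taken from the Calico source:

// Illustration only: a sketch of the readiness gate this log keeps tripping
// over, not the actual Calico plugin code. The CNI plugin refuses to do
// ADD/DEL work until calico/node has written its nodename file.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node once it is running

func ensureCalicoReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			return fmt.Errorf("stat %s: no such file or directory: "+
				"check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
		}
		return err
	}
	return nil
}

func main() {
	if err := ensureCalicoReady(); err != nil {
		fmt.Println("CNI ADD/DEL would fail:", err)
		os.Exit(1)
	}
	fmt.Println("calico/node is ready; sandbox networking can proceed")
}
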
Jan 17 12:10:22.163203 containerd[1471]: time="2025-01-17T12:10:22.163148018Z" level=error msg="Failed to destroy network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.163571 containerd[1471]: time="2025-01-17T12:10:22.163546282Z" level=error msg="encountered an error cleaning up failed sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.163641 containerd[1471]: time="2025-01-17T12:10:22.163619336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8975f968-wpfqg,Uid:c6c7969a-d094-4962-9f3d-83a3ce21e375,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.163938 kubelet[2607]: E0117 12:10:22.163900 2607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.164377 kubelet[2607]: E0117 12:10:22.164055 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" Jan 17 12:10:22.164377 kubelet[2607]: E0117 12:10:22.164082 2607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" Jan 17 12:10:22.164377 kubelet[2607]: E0117 12:10:22.164129 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8975f968-wpfqg_calico-apiserver(c6c7969a-d094-4962-9f3d-83a3ce21e375)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8975f968-wpfqg_calico-apiserver(c6c7969a-d094-4962-9f3d-83a3ce21e375)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" podUID="c6c7969a-d094-4962-9f3d-83a3ce21e375" Jan 17 12:10:22.166367 containerd[1471]: time="2025-01-17T12:10:22.166323777Z" level=error msg="Failed to destroy network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.167015 containerd[1471]: time="2025-01-17T12:10:22.166838549Z" level=error msg="encountered an error cleaning up failed sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.167015 containerd[1471]: time="2025-01-17T12:10:22.166902495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d754b8cc8-79mq6,Uid:09d6eb3a-b020-453e-a3b2-1c2857fad614,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.167709 kubelet[2607]: E0117 12:10:22.167290 2607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.167709 kubelet[2607]: E0117 12:10:22.167330 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" Jan 17 12:10:22.167709 kubelet[2607]: E0117 12:10:22.167347 2607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" Jan 17 12:10:22.167819 kubelet[2607]: E0117 12:10:22.167377 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d754b8cc8-79mq6_calico-system(09d6eb3a-b020-453e-a3b2-1c2857fad614)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d754b8cc8-79mq6_calico-system(09d6eb3a-b020-453e-a3b2-1c2857fad614)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" podUID="09d6eb3a-b020-453e-a3b2-1c2857fad614" Jan 17 12:10:22.180695 containerd[1471]: time="2025-01-17T12:10:22.180624885Z" level=error msg="Failed to destroy network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.180981 containerd[1471]: time="2025-01-17T12:10:22.180920638Z" level=error msg="Failed to destroy network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.181045 containerd[1471]: time="2025-01-17T12:10:22.181018731Z" level=error msg="encountered an error cleaning up failed sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.181106 containerd[1471]: time="2025-01-17T12:10:22.181073428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8975f968-4wqpn,Uid:50e92775-825e-4d1d-9a42-956f2281a0b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.181361 kubelet[2607]: E0117 12:10:22.181323 2607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.181433 kubelet[2607]: E0117 12:10:22.181395 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" Jan 17 12:10:22.181471 kubelet[2607]: E0117 12:10:22.181416 2607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" Jan 17 12:10:22.181498 containerd[1471]: time="2025-01-17T12:10:22.181413607Z" level=error msg="encountered an error cleaning up failed sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.181498 containerd[1471]: time="2025-01-17T12:10:22.181470590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4grjz,Uid:cf1e6aaa-53c2-4de6-a445-b92ba78d0548,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.181553 kubelet[2607]: E0117 12:10:22.181480 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8975f968-4wqpn_calico-apiserver(50e92775-825e-4d1d-9a42-956f2281a0b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8975f968-4wqpn_calico-apiserver(50e92775-825e-4d1d-9a42-956f2281a0b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" podUID="50e92775-825e-4d1d-9a42-956f2281a0b9" Jan 17 12:10:22.181698 kubelet[2607]: E0117 12:10:22.181658 2607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.181726 kubelet[2607]: E0117 12:10:22.181703 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4grjz" Jan 17 12:10:22.181726 kubelet[2607]: E0117 12:10:22.181719 2607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4grjz" Jan 17 12:10:22.181769 kubelet[2607]: E0117 12:10:22.181747 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4grjz_kube-system(cf1e6aaa-53c2-4de6-a445-b92ba78d0548)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4grjz_kube-system(cf1e6aaa-53c2-4de6-a445-b92ba78d0548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4grjz" podUID="cf1e6aaa-53c2-4de6-a445-b92ba78d0548" Jan 17 12:10:22.253651 sshd[3480]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:22.257629 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:36036.service: Deactivated successfully. Jan 17 12:10:22.259744 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:10:22.260462 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:10:22.261465 systemd-logind[1458]: Removed session 10. Jan 17 12:10:22.413162 kubelet[2607]: E0117 12:10:22.412987 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:22.413842 kubelet[2607]: I0117 12:10:22.413805 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:10:22.414175 containerd[1471]: time="2025-01-17T12:10:22.414126347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:10:22.414747 containerd[1471]: time="2025-01-17T12:10:22.414703232Z" level=info msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\"" Jan 17 12:10:22.416188 containerd[1471]: time="2025-01-17T12:10:22.414911782Z" level=info msg="Ensure that sandbox 469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd in task-service has been cleanup successfully" Jan 17 12:10:22.416188 containerd[1471]: time="2025-01-17T12:10:22.416110962Z" level=info msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\"" Jan 17 12:10:22.416267 kubelet[2607]: I0117 12:10:22.415454 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:10:22.416326 containerd[1471]: time="2025-01-17T12:10:22.416265135Z" level=info msg="Ensure that sandbox d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965 in task-service has been cleanup successfully" Jan 17 12:10:22.417782 kubelet[2607]: I0117 12:10:22.417324 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:10:22.417895 containerd[1471]: time="2025-01-17T12:10:22.417857840Z" level=info msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\"" Jan 17 12:10:22.418055 containerd[1471]: time="2025-01-17T12:10:22.418033054Z" level=info msg="Ensure that sandbox 5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708 in task-service has been cleanup successfully" Jan 17 12:10:22.426159 kubelet[2607]: I0117 12:10:22.423998 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:10:22.426285 containerd[1471]: time="2025-01-17T12:10:22.424995072Z" 
level=info msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\"" Jan 17 12:10:22.426285 containerd[1471]: time="2025-01-17T12:10:22.425600103Z" level=info msg="Ensure that sandbox fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e in task-service has been cleanup successfully" Jan 17 12:10:22.426949 kubelet[2607]: I0117 12:10:22.426875 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:10:22.427966 containerd[1471]: time="2025-01-17T12:10:22.427933484Z" level=info msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\"" Jan 17 12:10:22.431324 containerd[1471]: time="2025-01-17T12:10:22.431021410Z" level=info msg="Ensure that sandbox 6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce in task-service has been cleanup successfully" Jan 17 12:10:22.468255 containerd[1471]: time="2025-01-17T12:10:22.468198511Z" level=error msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" failed" error="failed to destroy network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.468808 kubelet[2607]: E0117 12:10:22.468759 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:10:22.468890 kubelet[2607]: E0117 12:10:22.468831 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708"} Jan 17 12:10:22.468938 kubelet[2607]: E0117 12:10:22.468915 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d963b6e-e967-461b-88c1-043d231c7107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:22.469020 kubelet[2607]: E0117 12:10:22.468951 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d963b6e-e967-461b-88c1-043d231c7107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hmksg" podUID="7d963b6e-e967-461b-88c1-043d231c7107" Jan 17 12:10:22.473082 containerd[1471]: time="2025-01-17T12:10:22.473044696Z" level=error msg="StopPodSandbox for 
\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" failed" error="failed to destroy network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.473301 kubelet[2607]: E0117 12:10:22.473261 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:10:22.473362 kubelet[2607]: E0117 12:10:22.473317 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e"} Jan 17 12:10:22.473362 kubelet[2607]: E0117 12:10:22.473345 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:22.473468 kubelet[2607]: E0117 12:10:22.473369 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4grjz" podUID="cf1e6aaa-53c2-4de6-a445-b92ba78d0548" Jan 17 12:10:22.475942 containerd[1471]: time="2025-01-17T12:10:22.475869915Z" level=error msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" failed" error="failed to destroy network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.476137 kubelet[2607]: E0117 12:10:22.476105 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:10:22.476186 kubelet[2607]: E0117 12:10:22.476140 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965"} Jan 17 12:10:22.476186 kubelet[2607]: E0117 12:10:22.476170 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09d6eb3a-b020-453e-a3b2-1c2857fad614\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:22.476277 kubelet[2607]: E0117 12:10:22.476195 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09d6eb3a-b020-453e-a3b2-1c2857fad614\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" podUID="09d6eb3a-b020-453e-a3b2-1c2857fad614" Jan 17 12:10:22.477192 containerd[1471]: time="2025-01-17T12:10:22.477158470Z" level=error msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" failed" error="failed to destroy network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.477456 kubelet[2607]: E0117 12:10:22.477389 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:10:22.477504 kubelet[2607]: E0117 12:10:22.477464 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd"} Jan 17 12:10:22.477539 kubelet[2607]: E0117 12:10:22.477508 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6c7969a-d094-4962-9f3d-83a3ce21e375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:22.477614 kubelet[2607]: E0117 12:10:22.477533 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6c7969a-d094-4962-9f3d-83a3ce21e375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" podUID="c6c7969a-d094-4962-9f3d-83a3ce21e375" Jan 17 12:10:22.489511 containerd[1471]: time="2025-01-17T12:10:22.489449504Z" level=error msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" failed" error="failed to destroy network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:22.489769 kubelet[2607]: E0117 12:10:22.489721 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:10:22.489824 kubelet[2607]: E0117 12:10:22.489780 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce"} Jan 17 12:10:22.489855 kubelet[2607]: E0117 12:10:22.489835 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50e92775-825e-4d1d-9a42-956f2281a0b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:22.489929 kubelet[2607]: E0117 12:10:22.489867 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50e92775-825e-4d1d-9a42-956f2281a0b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" podUID="50e92775-825e-4d1d-9a42-956f2281a0b9" Jan 17 12:10:23.321375 systemd[1]: Created slice kubepods-besteffort-pod83490bae_2f03_49cc_b16c_ff7f265ed80b.slice - libcontainer container kubepods-besteffort-pod83490bae_2f03_49cc_b16c_ff7f265ed80b.slice. 
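
The "Created slice" lines in this log show how kubelet (with the systemd cgroup driver) names pod cgroups: the QoS class plus the pod UID with dashes mapped to underscores, so UID 83490bae-2f03-49cc-b16c-ff7f265ed80b becomes kubepods-besteffort-pod83490bae_2f03_49cc_b16c_ff7f265ed80b.slice. A small illustrative Go sketch of that mapping (an assumption drawn from the names visible here, not kubelet source):

// Sketch: derive the systemd slice name seen in the "Created slice" entries
// from the QoS class and pod UID.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "83490bae-2f03-49cc-b16c-ff7f265ed80b"))
	// kubepods-besteffort-pod83490bae_2f03_49cc_b16c_ff7f265ed80b.slice
}
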
Jan 17 12:10:23.324446 containerd[1471]: time="2025-01-17T12:10:23.324389989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xfsj8,Uid:83490bae-2f03-49cc-b16c-ff7f265ed80b,Namespace:calico-system,Attempt:0,}" Jan 17 12:10:23.446260 containerd[1471]: time="2025-01-17T12:10:23.446171191Z" level=error msg="Failed to destroy network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:23.446804 containerd[1471]: time="2025-01-17T12:10:23.446758445Z" level=error msg="encountered an error cleaning up failed sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:23.446871 containerd[1471]: time="2025-01-17T12:10:23.446839725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xfsj8,Uid:83490bae-2f03-49cc-b16c-ff7f265ed80b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:23.447190 kubelet[2607]: E0117 12:10:23.447136 2607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:23.447736 kubelet[2607]: E0117 12:10:23.447222 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xfsj8" Jan 17 12:10:23.447736 kubelet[2607]: E0117 12:10:23.447254 2607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xfsj8" Jan 17 12:10:23.447736 kubelet[2607]: E0117 12:10:23.447325 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xfsj8_calico-system(83490bae-2f03-49cc-b16c-ff7f265ed80b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xfsj8_calico-system(83490bae-2f03-49cc-b16c-ff7f265ed80b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:23.449300 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce-shm.mount: Deactivated successfully. Jan 17 12:10:24.432397 kubelet[2607]: I0117 12:10:24.432355 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:10:24.433007 containerd[1471]: time="2025-01-17T12:10:24.432962263Z" level=info msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\"" Jan 17 12:10:24.433407 containerd[1471]: time="2025-01-17T12:10:24.433141725Z" level=info msg="Ensure that sandbox da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce in task-service has been cleanup successfully" Jan 17 12:10:24.469501 containerd[1471]: time="2025-01-17T12:10:24.469422237Z" level=error msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\" failed" error="failed to destroy network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:24.469841 kubelet[2607]: E0117 12:10:24.469793 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:10:24.470233 kubelet[2607]: E0117 12:10:24.469859 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce"} Jan 17 12:10:24.470233 kubelet[2607]: E0117 12:10:24.469901 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83490bae-2f03-49cc-b16c-ff7f265ed80b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:24.470233 kubelet[2607]: E0117 12:10:24.469934 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83490bae-2f03-49cc-b16c-ff7f265ed80b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:27.265966 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:36046.service - OpenSSH per-connection server daemon (10.0.0.1:36046). Jan 17 12:10:27.306885 sshd[3809]: Accepted publickey for core from 10.0.0.1 port 36046 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:27.310256 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:27.314911 systemd-logind[1458]: New session 11 of user core. Jan 17 12:10:27.321721 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:10:27.454043 sshd[3809]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:27.462366 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:36046.service: Deactivated successfully. Jan 17 12:10:27.465008 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:10:27.466449 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:10:27.475963 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:60324.service - OpenSSH per-connection server daemon (10.0.0.1:60324). Jan 17 12:10:27.477315 systemd-logind[1458]: Removed session 11. Jan 17 12:10:27.516112 sshd[3825]: Accepted publickey for core from 10.0.0.1 port 60324 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:27.517244 sshd[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:27.524717 systemd-logind[1458]: New session 12 of user core. Jan 17 12:10:27.531810 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:10:27.742162 sshd[3825]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:27.753278 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:60324.service: Deactivated successfully. Jan 17 12:10:27.755194 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:10:27.757699 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:10:27.766922 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:60338.service - OpenSSH per-connection server daemon (10.0.0.1:60338). Jan 17 12:10:27.769954 systemd-logind[1458]: Removed session 12. Jan 17 12:10:27.809564 sshd[3837]: Accepted publickey for core from 10.0.0.1 port 60338 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:27.811400 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:27.818190 systemd-logind[1458]: New session 13 of user core. Jan 17 12:10:27.825814 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:10:27.969125 sshd[3837]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:27.974484 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:60338.service: Deactivated successfully. Jan 17 12:10:27.977897 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:10:27.979841 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:10:27.981132 systemd-logind[1458]: Removed session 13. Jan 17 12:10:29.471909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2722620411.mount: Deactivated successfully. 
Jan 17 12:10:30.751231 containerd[1471]: time="2025-01-17T12:10:30.751143778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:30.826971 containerd[1471]: time="2025-01-17T12:10:30.826885164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:10:30.873281 containerd[1471]: time="2025-01-17T12:10:30.873216366Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:30.883658 containerd[1471]: time="2025-01-17T12:10:30.883580477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:30.884114 containerd[1471]: time="2025-01-17T12:10:30.884067826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.469884557s" Jan 17 12:10:30.884169 containerd[1471]: time="2025-01-17T12:10:30.884110781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:10:30.892437 containerd[1471]: time="2025-01-17T12:10:30.892376340Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:10:31.027492 containerd[1471]: time="2025-01-17T12:10:31.027342141Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18\"" Jan 17 12:10:31.028226 containerd[1471]: time="2025-01-17T12:10:31.028153542Z" level=info msg="StartContainer for \"49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18\"" Jan 17 12:10:31.112913 systemd[1]: Started cri-containerd-49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18.scope - libcontainer container 49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18. Jan 17 12:10:31.154532 containerd[1471]: time="2025-01-17T12:10:31.154476278Z" level=info msg="StartContainer for \"49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18\" returns successfully" Jan 17 12:10:31.219584 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:10:31.219713 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:10:31.241523 systemd[1]: cri-containerd-49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18.scope: Deactivated successfully. 
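
For reference, the pull reported above moved roughly 142.7 MB (size 142741872) in about 8.47 s, i.e. around 17 MB/s; the calico-node container then starts, and its scope is deactivated almost immediately, which the shim-disconnect, RemoveContainer and back-off lines that follow treat as a crash. A trivial sketch of the throughput arithmetic, using the numbers from the log:

// Worked arithmetic for the pull of ghcr.io/flatcar/calico/node:v3.29.1.
package main

import "fmt"

func main() {
	const (
		bytesPulled = 142741872   // repo digest size reported by containerd
		seconds     = 8.469884557 // pull duration reported by containerd
	)
	fmt.Printf("average pull throughput: %.1f MB/s\n", bytesPulled/seconds/1e6)
}
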
Jan 17 12:10:31.266470 containerd[1471]: time="2025-01-17T12:10:31.266389471Z" level=info msg="shim disconnected" id=49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18 namespace=k8s.io Jan 17 12:10:31.266470 containerd[1471]: time="2025-01-17T12:10:31.266451052Z" level=warning msg="cleaning up after shim disconnected" id=49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18 namespace=k8s.io Jan 17 12:10:31.266470 containerd[1471]: time="2025-01-17T12:10:31.266459938Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:31.451789 kubelet[2607]: I0117 12:10:31.451761 2607 scope.go:117] "RemoveContainer" containerID="49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18" Jan 17 12:10:31.452249 kubelet[2607]: E0117 12:10:31.451839 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:31.454560 containerd[1471]: time="2025-01-17T12:10:31.454421241Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Jan 17 12:10:31.471943 containerd[1471]: time="2025-01-17T12:10:31.471869034Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c\"" Jan 17 12:10:31.472436 containerd[1471]: time="2025-01-17T12:10:31.472379409Z" level=info msg="StartContainer for \"83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c\"" Jan 17 12:10:31.501735 systemd[1]: Started cri-containerd-83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c.scope - libcontainer container 83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c. Jan 17 12:10:31.534336 containerd[1471]: time="2025-01-17T12:10:31.534271238Z" level=info msg="StartContainer for \"83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c\" returns successfully" Jan 17 12:10:31.588847 systemd[1]: cri-containerd-83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c.scope: Deactivated successfully. Jan 17 12:10:31.615021 containerd[1471]: time="2025-01-17T12:10:31.614925101Z" level=info msg="shim disconnected" id=83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c namespace=k8s.io Jan 17 12:10:31.615021 containerd[1471]: time="2025-01-17T12:10:31.615007041Z" level=warning msg="cleaning up after shim disconnected" id=83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c namespace=k8s.io Jan 17 12:10:31.615021 containerd[1471]: time="2025-01-17T12:10:31.615019504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:31.891004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18-rootfs.mount: Deactivated successfully. 
Jan 17 12:10:32.455276 kubelet[2607]: I0117 12:10:32.455245 2607 scope.go:117] "RemoveContainer" containerID="49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18" Jan 17 12:10:32.455774 kubelet[2607]: I0117 12:10:32.455628 2607 scope.go:117] "RemoveContainer" containerID="83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c" Jan 17 12:10:32.455774 kubelet[2607]: E0117 12:10:32.455683 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:32.457151 kubelet[2607]: E0117 12:10:32.456876 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-hm29z_calico-system(20b3aef6-8302-4600-bbe2-09c056e53e6a)\"" pod="calico-system/calico-node-hm29z" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" Jan 17 12:10:32.494347 containerd[1471]: time="2025-01-17T12:10:32.494290198Z" level=info msg="RemoveContainer for \"49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18\"" Jan 17 12:10:32.499174 containerd[1471]: time="2025-01-17T12:10:32.499132734Z" level=info msg="RemoveContainer for \"49c7f03b0a9ad98645986e586f21f5ef967d2c6a89d13580e35ba65c36dfba18\" returns successfully" Jan 17 12:10:32.982808 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:60344.service - OpenSSH per-connection server daemon (10.0.0.1:60344). Jan 17 12:10:33.041768 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 60344 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:33.043982 sshd[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:33.049160 systemd-logind[1458]: New session 14 of user core. Jan 17 12:10:33.063730 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:10:33.198416 sshd[3991]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:33.202387 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:60344.service: Deactivated successfully. Jan 17 12:10:33.204438 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:10:33.205122 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:10:33.206001 systemd-logind[1458]: Removed session 14. 
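
The "back-off 10s restarting failed container=calico-node" message above is kubelet's standard crash-loop back-off. Assuming the commonly documented defaults (10 s initial delay, doubling after each failed restart, capped at 5 minutes, reset once the container runs cleanly for a while), the restart delays would follow the schedule in this sketch; the schedule itself is an assumption about kubelet defaults, not taken from this log:

// Hypothetical illustration of kubelet's crash-loop back-off schedule,
// assuming 10s initial delay doubling up to a 5-minute cap.
package main

import (
	"fmt"
	"time"
)

func backoffSchedule(restarts int) []time.Duration {
	const (
		initial = 10 * time.Second
		max     = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, restarts)
	d := initial
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return delays
}

func main() {
	// Prints: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
	fmt.Println(backoffSchedule(7))
}
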
Jan 17 12:10:33.413407 kubelet[2607]: I0117 12:10:33.413274 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:10:33.414034 kubelet[2607]: E0117 12:10:33.414010 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:33.459922 kubelet[2607]: E0117 12:10:33.459886 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:34.315669 containerd[1471]: time="2025-01-17T12:10:34.315477669Z" level=info msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\"" Jan 17 12:10:34.316316 containerd[1471]: time="2025-01-17T12:10:34.315719820Z" level=info msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\"" Jan 17 12:10:34.316619 containerd[1471]: time="2025-01-17T12:10:34.316432404Z" level=info msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\"" Jan 17 12:10:34.316619 containerd[1471]: time="2025-01-17T12:10:34.316515565Z" level=info msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\"" Jan 17 12:10:34.366445 containerd[1471]: time="2025-01-17T12:10:34.366213514Z" level=error msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" failed" error="failed to destroy network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:34.366838 kubelet[2607]: E0117 12:10:34.366513 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:10:34.366838 kubelet[2607]: E0117 12:10:34.366580 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708"} Jan 17 12:10:34.366838 kubelet[2607]: E0117 12:10:34.366645 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d963b6e-e967-461b-88c1-043d231c7107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:34.366838 kubelet[2607]: E0117 12:10:34.366678 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d963b6e-e967-461b-88c1-043d231c7107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hmksg" podUID="7d963b6e-e967-461b-88c1-043d231c7107" Jan 17 12:10:34.371925 containerd[1471]: time="2025-01-17T12:10:34.371866568Z" level=error msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" failed" error="failed to destroy network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:34.372182 kubelet[2607]: E0117 12:10:34.372107 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:10:34.372268 kubelet[2607]: E0117 12:10:34.372195 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd"} Jan 17 12:10:34.372268 kubelet[2607]: E0117 12:10:34.372233 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6c7969a-d094-4962-9f3d-83a3ce21e375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:34.372268 kubelet[2607]: E0117 12:10:34.372263 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6c7969a-d094-4962-9f3d-83a3ce21e375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" podUID="c6c7969a-d094-4962-9f3d-83a3ce21e375" Jan 17 12:10:34.372450 containerd[1471]: time="2025-01-17T12:10:34.372415615Z" level=error msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" failed" error="failed to destroy network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:34.372560 kubelet[2607]: E0117 12:10:34.372533 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:10:34.372560 kubelet[2607]: E0117 12:10:34.372564 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e"} Jan 17 12:10:34.372676 kubelet[2607]: E0117 12:10:34.372601 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:34.372676 kubelet[2607]: E0117 12:10:34.372628 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4grjz" podUID="cf1e6aaa-53c2-4de6-a445-b92ba78d0548" Jan 17 12:10:34.372831 containerd[1471]: time="2025-01-17T12:10:34.372801174Z" level=error msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" failed" error="failed to destroy network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:34.372959 kubelet[2607]: E0117 12:10:34.372920 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:10:34.373003 kubelet[2607]: E0117 12:10:34.372955 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce"} Jan 17 12:10:34.373003 kubelet[2607]: E0117 12:10:34.372985 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50e92775-825e-4d1d-9a42-956f2281a0b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:34.373103 kubelet[2607]: E0117 12:10:34.373011 2607 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50e92775-825e-4d1d-9a42-956f2281a0b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" podUID="50e92775-825e-4d1d-9a42-956f2281a0b9" Jan 17 12:10:36.314861 containerd[1471]: time="2025-01-17T12:10:36.314813574Z" level=info msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\"" Jan 17 12:10:36.341512 containerd[1471]: time="2025-01-17T12:10:36.341464536Z" level=error msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" failed" error="failed to destroy network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:36.341744 kubelet[2607]: E0117 12:10:36.341696 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:10:36.342043 kubelet[2607]: E0117 12:10:36.341759 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965"} Jan 17 12:10:36.342043 kubelet[2607]: E0117 12:10:36.341792 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09d6eb3a-b020-453e-a3b2-1c2857fad614\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:36.342043 kubelet[2607]: E0117 12:10:36.341815 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09d6eb3a-b020-453e-a3b2-1c2857fad614\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" podUID="09d6eb3a-b020-453e-a3b2-1c2857fad614" Jan 17 12:10:38.210283 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:46646.service - OpenSSH per-connection server daemon (10.0.0.1:46646). 
Jan 17 12:10:38.247069 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 46646 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:38.248456 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:38.252334 systemd-logind[1458]: New session 15 of user core. Jan 17 12:10:38.262722 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:10:38.370956 sshd[4123]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:38.375275 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:46646.service: Deactivated successfully. Jan 17 12:10:38.377364 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:10:38.378121 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:10:38.379017 systemd-logind[1458]: Removed session 15. Jan 17 12:10:40.315759 containerd[1471]: time="2025-01-17T12:10:40.315297094Z" level=info msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\"" Jan 17 12:10:40.342391 containerd[1471]: time="2025-01-17T12:10:40.342331138Z" level=error msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\" failed" error="failed to destroy network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:40.342640 kubelet[2607]: E0117 12:10:40.342559 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:10:40.342640 kubelet[2607]: E0117 12:10:40.342640 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce"} Jan 17 12:10:40.343068 kubelet[2607]: E0117 12:10:40.342683 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83490bae-2f03-49cc-b16c-ff7f265ed80b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:40.343068 kubelet[2607]: E0117 12:10:40.342714 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83490bae-2f03-49cc-b16c-ff7f265ed80b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:43.315246 kubelet[2607]: I0117 
12:10:43.315200 2607 scope.go:117] "RemoveContainer" containerID="83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c" Jan 17 12:10:43.315795 kubelet[2607]: E0117 12:10:43.315311 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:43.318074 containerd[1471]: time="2025-01-17T12:10:43.318036312Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Jan 17 12:10:43.336492 containerd[1471]: time="2025-01-17T12:10:43.336430577Z" level=info msg="CreateContainer within sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361\"" Jan 17 12:10:43.337052 containerd[1471]: time="2025-01-17T12:10:43.337026997Z" level=info msg="StartContainer for \"acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361\"" Jan 17 12:10:43.370790 systemd[1]: Started cri-containerd-acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361.scope - libcontainer container acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361. Jan 17 12:10:43.378397 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:46662.service - OpenSSH per-connection server daemon (10.0.0.1:46662). Jan 17 12:10:43.411540 containerd[1471]: time="2025-01-17T12:10:43.411485618Z" level=info msg="StartContainer for \"acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361\" returns successfully" Jan 17 12:10:43.416690 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 46662 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:43.418735 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:43.424498 systemd-logind[1458]: New session 16 of user core. Jan 17 12:10:43.430875 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:10:43.468333 systemd[1]: cri-containerd-acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361.scope: Deactivated successfully. 
Jan 17 12:10:43.486654 kubelet[2607]: E0117 12:10:43.486465 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:43.502974 containerd[1471]: time="2025-01-17T12:10:43.502863797Z" level=info msg="shim disconnected" id=acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361 namespace=k8s.io Jan 17 12:10:43.502974 containerd[1471]: time="2025-01-17T12:10:43.502958510Z" level=warning msg="cleaning up after shim disconnected" id=acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361 namespace=k8s.io Jan 17 12:10:43.502974 containerd[1471]: time="2025-01-17T12:10:43.502972868Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:43.590345 containerd[1471]: time="2025-01-17T12:10:43.582550456Z" level=error msg="ExecSync for \"acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361 not found: not found" Jan 17 12:10:43.591187 kubelet[2607]: E0117 12:10:43.590422 2607 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361 not found: not found" containerID="acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 17 12:10:43.591454 containerd[1471]: time="2025-01-17T12:10:43.591415342Z" level=error msg="ExecSync for \"acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Jan 17 12:10:43.591564 kubelet[2607]: E0117 12:10:43.591533 2607 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 17 12:10:43.591765 containerd[1471]: time="2025-01-17T12:10:43.591734397Z" level=error msg="ExecSync for \"acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Jan 17 12:10:43.592151 kubelet[2607]: E0117 12:10:43.591848 2607 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 17 12:10:43.606070 sshd[4186]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:43.610682 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:46662.service: Deactivated successfully. Jan 17 12:10:43.612496 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:10:43.613097 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:10:43.614168 systemd-logind[1458]: Removed session 16. Jan 17 12:10:44.329707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361-rootfs.mount: Deactivated successfully. 
Jan 17 12:10:44.489863 kubelet[2607]: I0117 12:10:44.489823 2607 scope.go:117] "RemoveContainer" containerID="83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c" Jan 17 12:10:44.490323 kubelet[2607]: I0117 12:10:44.490177 2607 scope.go:117] "RemoveContainer" containerID="acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361" Jan 17 12:10:44.490323 kubelet[2607]: E0117 12:10:44.490253 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:44.490760 kubelet[2607]: E0117 12:10:44.490732 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-hm29z_calico-system(20b3aef6-8302-4600-bbe2-09c056e53e6a)\"" pod="calico-system/calico-node-hm29z" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" Jan 17 12:10:44.491210 containerd[1471]: time="2025-01-17T12:10:44.491164406Z" level=info msg="RemoveContainer for \"83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c\"" Jan 17 12:10:44.499825 containerd[1471]: time="2025-01-17T12:10:44.499765482Z" level=info msg="RemoveContainer for \"83aed4c8c2d802f4043d2b6c6a3b47ffc70af833f1127eec9104ff77c4c1028c\" returns successfully" Jan 17 12:10:45.494854 kubelet[2607]: I0117 12:10:45.494802 2607 scope.go:117] "RemoveContainer" containerID="acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361" Jan 17 12:10:45.495370 kubelet[2607]: E0117 12:10:45.494884 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:45.495370 kubelet[2607]: E0117 12:10:45.495305 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-hm29z_calico-system(20b3aef6-8302-4600-bbe2-09c056e53e6a)\"" pod="calico-system/calico-node-hm29z" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" Jan 17 12:10:46.315023 containerd[1471]: time="2025-01-17T12:10:46.314694994Z" level=info msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\"" Jan 17 12:10:46.348801 containerd[1471]: time="2025-01-17T12:10:46.348747029Z" level=error msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" failed" error="failed to destroy network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:46.349083 kubelet[2607]: E0117 12:10:46.349024 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:10:46.349144 kubelet[2607]: E0117 12:10:46.349081 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd"} Jan 17 12:10:46.349144 kubelet[2607]: E0117 12:10:46.349115 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6c7969a-d094-4962-9f3d-83a3ce21e375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:46.349228 kubelet[2607]: E0117 12:10:46.349139 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6c7969a-d094-4962-9f3d-83a3ce21e375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" podUID="c6c7969a-d094-4962-9f3d-83a3ce21e375" Jan 17 12:10:47.315299 containerd[1471]: time="2025-01-17T12:10:47.315246039Z" level=info msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\"" Jan 17 12:10:47.356170 containerd[1471]: time="2025-01-17T12:10:47.346069389Z" level=error msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" failed" error="failed to destroy network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:47.356414 kubelet[2607]: E0117 12:10:47.356360 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:10:47.356837 kubelet[2607]: E0117 12:10:47.356427 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e"} Jan 17 12:10:47.356837 kubelet[2607]: E0117 12:10:47.356488 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:47.356837 kubelet[2607]: E0117 12:10:47.356520 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4grjz" podUID="cf1e6aaa-53c2-4de6-a445-b92ba78d0548" Jan 17 12:10:48.315414 containerd[1471]: time="2025-01-17T12:10:48.315293646Z" level=info msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\"" Jan 17 12:10:48.342953 containerd[1471]: time="2025-01-17T12:10:48.342895567Z" level=error msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" failed" error="failed to destroy network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:48.353024 kubelet[2607]: E0117 12:10:48.352984 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:10:48.353141 kubelet[2607]: E0117 12:10:48.353032 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce"} Jan 17 12:10:48.353141 kubelet[2607]: E0117 12:10:48.353064 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50e92775-825e-4d1d-9a42-956f2281a0b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:48.353141 kubelet[2607]: E0117 12:10:48.353085 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50e92775-825e-4d1d-9a42-956f2281a0b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" podUID="50e92775-825e-4d1d-9a42-956f2281a0b9" Jan 17 12:10:48.617217 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:52768.service - OpenSSH per-connection server daemon (10.0.0.1:52768). Jan 17 12:10:48.705826 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 52768 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:48.707293 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:48.711094 systemd-logind[1458]: New session 17 of user core. 
Jan 17 12:10:48.720723 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:10:48.827834 sshd[4309]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:48.831701 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:52768.service: Deactivated successfully. Jan 17 12:10:48.833833 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:10:48.834475 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:10:48.835351 systemd-logind[1458]: Removed session 17. Jan 17 12:10:49.315050 containerd[1471]: time="2025-01-17T12:10:49.314990670Z" level=info msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\"" Jan 17 12:10:49.315247 containerd[1471]: time="2025-01-17T12:10:49.314992844Z" level=info msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\"" Jan 17 12:10:49.343763 containerd[1471]: time="2025-01-17T12:10:49.343701322Z" level=error msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" failed" error="failed to destroy network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:49.344150 containerd[1471]: time="2025-01-17T12:10:49.344094799Z" level=error msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" failed" error="failed to destroy network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:49.344183 kubelet[2607]: E0117 12:10:49.343942 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:10:49.344183 kubelet[2607]: E0117 12:10:49.343989 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965"} Jan 17 12:10:49.344183 kubelet[2607]: E0117 12:10:49.344028 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09d6eb3a-b020-453e-a3b2-1c2857fad614\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:49.344183 kubelet[2607]: E0117 12:10:49.344058 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09d6eb3a-b020-453e-a3b2-1c2857fad614\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" podUID="09d6eb3a-b020-453e-a3b2-1c2857fad614" Jan 17 12:10:49.344692 kubelet[2607]: E0117 12:10:49.344221 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:10:49.344692 kubelet[2607]: E0117 12:10:49.344249 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708"} Jan 17 12:10:49.344692 kubelet[2607]: E0117 12:10:49.344314 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d963b6e-e967-461b-88c1-043d231c7107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:49.344692 kubelet[2607]: E0117 12:10:49.344337 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d963b6e-e967-461b-88c1-043d231c7107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hmksg" podUID="7d963b6e-e967-461b-88c1-043d231c7107" Jan 17 12:10:53.314789 containerd[1471]: time="2025-01-17T12:10:53.314720315Z" level=info msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\"" Jan 17 12:10:53.346205 containerd[1471]: time="2025-01-17T12:10:53.346141556Z" level=error msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\" failed" error="failed to destroy network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:10:53.346463 kubelet[2607]: E0117 12:10:53.346409 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 
12:10:53.346814 kubelet[2607]: E0117 12:10:53.346474 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce"} Jan 17 12:10:53.346814 kubelet[2607]: E0117 12:10:53.346509 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83490bae-2f03-49cc-b16c-ff7f265ed80b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:10:53.346814 kubelet[2607]: E0117 12:10:53.346544 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83490bae-2f03-49cc-b16c-ff7f265ed80b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xfsj8" podUID="83490bae-2f03-49cc-b16c-ff7f265ed80b" Jan 17 12:10:53.838725 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:52776.service - OpenSSH per-connection server daemon (10.0.0.1:52776). Jan 17 12:10:53.881208 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 52776 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:53.882756 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:53.886924 systemd-logind[1458]: New session 18 of user core. Jan 17 12:10:53.896743 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:10:54.011104 sshd[4394]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:54.015658 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:52776.service: Deactivated successfully. Jan 17 12:10:54.017755 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:10:54.018569 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:10:54.019674 systemd-logind[1458]: Removed session 18. Jan 17 12:10:57.315079 kubelet[2607]: E0117 12:10:57.315027 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:59.025179 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:42796.service - OpenSSH per-connection server daemon (10.0.0.1:42796). Jan 17 12:10:59.063986 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 42796 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:59.065900 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:59.070184 systemd-logind[1458]: New session 19 of user core. Jan 17 12:10:59.077779 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:10:59.190427 sshd[4410]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:59.194377 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:42796.service: Deactivated successfully. Jan 17 12:10:59.196700 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 17 12:10:59.197356 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:10:59.198439 systemd-logind[1458]: Removed session 19. Jan 17 12:10:59.314487 kubelet[2607]: I0117 12:10:59.314341 2607 scope.go:117] "RemoveContainer" containerID="acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361" Jan 17 12:10:59.314487 kubelet[2607]: E0117 12:10:59.314420 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:59.314986 kubelet[2607]: E0117 12:10:59.314857 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-hm29z_calico-system(20b3aef6-8302-4600-bbe2-09c056e53e6a)\"" pod="calico-system/calico-node-hm29z" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" Jan 17 12:11:00.315284 containerd[1471]: time="2025-01-17T12:11:00.315180472Z" level=info msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\"" Jan 17 12:11:00.354934 containerd[1471]: time="2025-01-17T12:11:00.354863756Z" level=error msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" failed" error="failed to destroy network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:11:00.355210 kubelet[2607]: E0117 12:11:00.355146 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:00.355564 kubelet[2607]: E0117 12:11:00.355224 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd"} Jan 17 12:11:00.355564 kubelet[2607]: E0117 12:11:00.355304 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6c7969a-d094-4962-9f3d-83a3ce21e375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:11:00.355564 kubelet[2607]: E0117 12:11:00.355336 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6c7969a-d094-4962-9f3d-83a3ce21e375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" podUID="c6c7969a-d094-4962-9f3d-83a3ce21e375" Jan 17 12:11:01.315144 containerd[1471]: time="2025-01-17T12:11:01.314823649Z" level=info msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\"" Jan 17 12:11:01.315360 containerd[1471]: time="2025-01-17T12:11:01.315307625Z" level=info msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\"" Jan 17 12:11:01.315693 containerd[1471]: time="2025-01-17T12:11:01.315323144Z" level=info msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\"" Jan 17 12:11:01.346786 containerd[1471]: time="2025-01-17T12:11:01.346724401Z" level=error msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" failed" error="failed to destroy network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:11:01.347092 kubelet[2607]: E0117 12:11:01.347012 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:01.347092 kubelet[2607]: E0117 12:11:01.347078 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965"} Jan 17 12:11:01.353379 kubelet[2607]: E0117 12:11:01.347125 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09d6eb3a-b020-453e-a3b2-1c2857fad614\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:11:01.353379 kubelet[2607]: E0117 12:11:01.347154 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09d6eb3a-b020-453e-a3b2-1c2857fad614\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" podUID="09d6eb3a-b020-453e-a3b2-1c2857fad614" Jan 17 12:11:01.353379 kubelet[2607]: E0117 12:11:01.347708 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" podSandboxID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:01.353379 kubelet[2607]: E0117 12:11:01.347738 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce"} Jan 17 12:11:01.353696 containerd[1471]: time="2025-01-17T12:11:01.347473413Z" level=error msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" failed" error="failed to destroy network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:11:01.353696 containerd[1471]: time="2025-01-17T12:11:01.352558036Z" level=error msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" failed" error="failed to destroy network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:11:01.353939 kubelet[2607]: E0117 12:11:01.347772 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50e92775-825e-4d1d-9a42-956f2281a0b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:11:01.353939 kubelet[2607]: E0117 12:11:01.347794 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50e92775-825e-4d1d-9a42-956f2281a0b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" podUID="50e92775-825e-4d1d-9a42-956f2281a0b9" Jan 17 12:11:01.353939 kubelet[2607]: E0117 12:11:01.352871 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:01.353939 kubelet[2607]: E0117 12:11:01.352926 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e"} Jan 17 12:11:01.354075 kubelet[2607]: E0117 12:11:01.352961 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:11:01.354075 kubelet[2607]: E0117 12:11:01.352990 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf1e6aaa-53c2-4de6-a445-b92ba78d0548\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4grjz" podUID="cf1e6aaa-53c2-4de6-a445-b92ba78d0548" Jan 17 12:11:03.891146 containerd[1471]: time="2025-01-17T12:11:03.891093646Z" level=info msg="StopPodSandbox for \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\"" Jan 17 12:11:03.898846 containerd[1471]: time="2025-01-17T12:11:03.898763672Z" level=info msg="Container to stop \"2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:11:03.898846 containerd[1471]: time="2025-01-17T12:11:03.898831061Z" level=info msg="Container to stop \"c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:11:03.898846 containerd[1471]: time="2025-01-17T12:11:03.898846169Z" level=info msg="Container to stop \"acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:11:03.901917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04-shm.mount: Deactivated successfully. Jan 17 12:11:03.909769 systemd[1]: cri-containerd-7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04.scope: Deactivated successfully. Jan 17 12:11:03.930975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04-rootfs.mount: Deactivated successfully. 
Jan 17 12:11:04.003316 containerd[1471]: time="2025-01-17T12:11:04.003233366Z" level=info msg="shim disconnected" id=7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04 namespace=k8s.io Jan 17 12:11:04.003316 containerd[1471]: time="2025-01-17T12:11:04.003313058Z" level=warning msg="cleaning up after shim disconnected" id=7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04 namespace=k8s.io Jan 17 12:11:04.003316 containerd[1471]: time="2025-01-17T12:11:04.003323518Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:04.021691 containerd[1471]: time="2025-01-17T12:11:04.021606231Z" level=info msg="TearDown network for sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" successfully" Jan 17 12:11:04.021691 containerd[1471]: time="2025-01-17T12:11:04.021640536Z" level=info msg="StopPodSandbox for \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" returns successfully" Jan 17 12:11:04.071210 kubelet[2607]: I0117 12:11:04.071143 2607 topology_manager.go:215] "Topology Admit Handler" podUID="e3998cf8-9f75-4fdc-8820-fe486080b75a" podNamespace="calico-system" podName="calico-node-cxt47" Jan 17 12:11:04.071799 kubelet[2607]: E0117 12:11:04.071231 2607 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" containerName="flexvol-driver" Jan 17 12:11:04.071799 kubelet[2607]: E0117 12:11:04.071244 2607 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" containerName="install-cni" Jan 17 12:11:04.071799 kubelet[2607]: E0117 12:11:04.071251 2607 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" containerName="calico-node" Jan 17 12:11:04.071799 kubelet[2607]: E0117 12:11:04.071258 2607 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" containerName="calico-node" Jan 17 12:11:04.071799 kubelet[2607]: E0117 12:11:04.071264 2607 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" containerName="calico-node" Jan 17 12:11:04.071799 kubelet[2607]: I0117 12:11:04.071295 2607 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" containerName="calico-node" Jan 17 12:11:04.071799 kubelet[2607]: I0117 12:11:04.071302 2607 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" containerName="calico-node" Jan 17 12:11:04.071799 kubelet[2607]: I0117 12:11:04.071339 2607 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" containerName="calico-node" Jan 17 12:11:04.079702 systemd[1]: Created slice kubepods-besteffort-pode3998cf8_9f75_4fdc_8820_fe486080b75a.slice - libcontainer container kubepods-besteffort-pode3998cf8_9f75_4fdc_8820_fe486080b75a.slice. Jan 17 12:11:04.209076 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:42808.service - OpenSSH per-connection server daemon (10.0.0.1:42808). 
Jan 17 12:11:04.228039 kubelet[2607]: I0117 12:11:04.227974 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20b3aef6-8302-4600-bbe2-09c056e53e6a-tigera-ca-bundle\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228039 kubelet[2607]: I0117 12:11:04.228018 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-flexvol-driver-host\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228039 kubelet[2607]: I0117 12:11:04.228040 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-var-run-calico\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228252 kubelet[2607]: I0117 12:11:04.228066 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-log-dir\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228252 kubelet[2607]: I0117 12:11:04.228084 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-net-dir\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228252 kubelet[2607]: I0117 12:11:04.228102 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-xtables-lock\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228252 kubelet[2607]: I0117 12:11:04.228119 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-policysync\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228252 kubelet[2607]: I0117 12:11:04.228138 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-bin-dir\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228252 kubelet[2607]: I0117 12:11:04.228133 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:04.228452 kubelet[2607]: I0117 12:11:04.228154 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-lib-modules\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228452 kubelet[2607]: I0117 12:11:04.228171 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-var-lib-calico\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228452 kubelet[2607]: I0117 12:11:04.228190 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:04.228452 kubelet[2607]: I0117 12:11:04.228193 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/20b3aef6-8302-4600-bbe2-09c056e53e6a-node-certs\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228452 kubelet[2607]: I0117 12:11:04.228230 2607 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr48w\" (UniqueName: \"kubernetes.io/projected/20b3aef6-8302-4600-bbe2-09c056e53e6a-kube-api-access-kr48w\") pod \"20b3aef6-8302-4600-bbe2-09c056e53e6a\" (UID: \"20b3aef6-8302-4600-bbe2-09c056e53e6a\") " Jan 17 12:11:04.228452 kubelet[2607]: I0117 12:11:04.228298 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e3998cf8-9f75-4fdc-8820-fe486080b75a-flexvol-driver-host\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228684 kubelet[2607]: I0117 12:11:04.228325 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3998cf8-9f75-4fdc-8820-fe486080b75a-tigera-ca-bundle\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228684 kubelet[2607]: I0117 12:11:04.228347 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e3998cf8-9f75-4fdc-8820-fe486080b75a-policysync\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228684 kubelet[2607]: I0117 12:11:04.228368 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e3998cf8-9f75-4fdc-8820-fe486080b75a-cni-net-dir\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228684 kubelet[2607]: I0117 12:11:04.228389 2607 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3998cf8-9f75-4fdc-8820-fe486080b75a-xtables-lock\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228684 kubelet[2607]: I0117 12:11:04.228411 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e3998cf8-9f75-4fdc-8820-fe486080b75a-node-certs\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228839 kubelet[2607]: I0117 12:11:04.228430 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e3998cf8-9f75-4fdc-8820-fe486080b75a-cni-bin-dir\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228839 kubelet[2607]: I0117 12:11:04.228454 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e3998cf8-9f75-4fdc-8820-fe486080b75a-var-lib-calico\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228839 kubelet[2607]: I0117 12:11:04.228475 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e3998cf8-9f75-4fdc-8820-fe486080b75a-var-run-calico\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228839 kubelet[2607]: I0117 12:11:04.228498 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3998cf8-9f75-4fdc-8820-fe486080b75a-lib-modules\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228839 kubelet[2607]: I0117 12:11:04.228516 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e3998cf8-9f75-4fdc-8820-fe486080b75a-cni-log-dir\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228973 kubelet[2607]: I0117 12:11:04.228538 2607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqvlw\" (UniqueName: \"kubernetes.io/projected/e3998cf8-9f75-4fdc-8820-fe486080b75a-kube-api-access-pqvlw\") pod \"calico-node-cxt47\" (UID: \"e3998cf8-9f75-4fdc-8820-fe486080b75a\") " pod="calico-system/calico-node-cxt47" Jan 17 12:11:04.228973 kubelet[2607]: I0117 12:11:04.228562 2607 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.228973 kubelet[2607]: I0117 12:11:04.228558 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod 
"20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:04.228973 kubelet[2607]: I0117 12:11:04.228575 2607 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.228973 kubelet[2607]: I0117 12:11:04.228613 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-policysync" (OuterVolumeSpecName: "policysync") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:04.228973 kubelet[2607]: I0117 12:11:04.228636 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:04.229149 kubelet[2607]: I0117 12:11:04.228637 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:04.229149 kubelet[2607]: I0117 12:11:04.228668 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:04.229149 kubelet[2607]: I0117 12:11:04.228683 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:04.229449 kubelet[2607]: I0117 12:11:04.229427 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:04.237607 kubelet[2607]: I0117 12:11:04.234755 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b3aef6-8302-4600-bbe2-09c056e53e6a-node-certs" (OuterVolumeSpecName: "node-certs") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:11:04.237607 kubelet[2607]: I0117 12:11:04.234997 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b3aef6-8302-4600-bbe2-09c056e53e6a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:11:04.235502 systemd[1]: var-lib-kubelet-pods-20b3aef6\x2d8302\x2d4600\x2dbbe2\x2d09c056e53e6a-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 17 12:11:04.237989 kubelet[2607]: I0117 12:11:04.237961 2607 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b3aef6-8302-4600-bbe2-09c056e53e6a-kube-api-access-kr48w" (OuterVolumeSpecName: "kube-api-access-kr48w") pod "20b3aef6-8302-4600-bbe2-09c056e53e6a" (UID: "20b3aef6-8302-4600-bbe2-09c056e53e6a"). InnerVolumeSpecName "kube-api-access-kr48w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:11:04.239243 systemd[1]: var-lib-kubelet-pods-20b3aef6\x2d8302\x2d4600\x2dbbe2\x2d09c056e53e6a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 17 12:11:04.239378 systemd[1]: var-lib-kubelet-pods-20b3aef6\x2d8302\x2d4600\x2dbbe2\x2d09c056e53e6a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkr48w.mount: Deactivated successfully. Jan 17 12:11:04.259514 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 42808 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:04.261868 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:04.267150 systemd-logind[1458]: New session 20 of user core. Jan 17 12:11:04.273812 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:11:04.315051 containerd[1471]: time="2025-01-17T12:11:04.314843388Z" level=info msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\"" Jan 17 12:11:04.324413 systemd[1]: Removed slice kubepods-besteffort-pod20b3aef6_8302_4600_bbe2_09c056e53e6a.slice - libcontainer container kubepods-besteffort-pod20b3aef6_8302_4600_bbe2_09c056e53e6a.slice. 
Jan 17 12:11:04.330473 kubelet[2607]: I0117 12:11:04.329648 2607 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.330473 kubelet[2607]: I0117 12:11:04.329683 2607 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.330473 kubelet[2607]: I0117 12:11:04.329693 2607 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.330473 kubelet[2607]: I0117 12:11:04.329703 2607 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20b3aef6-8302-4600-bbe2-09c056e53e6a-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.330473 kubelet[2607]: I0117 12:11:04.329711 2607 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.330473 kubelet[2607]: I0117 12:11:04.329719 2607 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-policysync\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.330473 kubelet[2607]: I0117 12:11:04.329727 2607 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.330473 kubelet[2607]: I0117 12:11:04.329736 2607 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/20b3aef6-8302-4600-bbe2-09c056e53e6a-node-certs\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.330781 kubelet[2607]: I0117 12:11:04.329744 2607 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20b3aef6-8302-4600-bbe2-09c056e53e6a-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.330781 kubelet[2607]: I0117 12:11:04.329752 2607 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kr48w\" (UniqueName: \"kubernetes.io/projected/20b3aef6-8302-4600-bbe2-09c056e53e6a-kube-api-access-kr48w\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:04.350114 containerd[1471]: time="2025-01-17T12:11:04.350058215Z" level=error msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" failed" error="failed to destroy network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:11:04.350539 kubelet[2607]: E0117 12:11:04.350489 2607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:04.350691 kubelet[2607]: E0117 12:11:04.350670 2607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708"} Jan 17 12:11:04.350824 kubelet[2607]: E0117 12:11:04.350752 2607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d963b6e-e967-461b-88c1-043d231c7107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:11:04.350824 kubelet[2607]: E0117 12:11:04.350788 2607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d963b6e-e967-461b-88c1-043d231c7107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hmksg" podUID="7d963b6e-e967-461b-88c1-043d231c7107" Jan 17 12:11:04.384037 kubelet[2607]: E0117 12:11:04.383998 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:04.384803 containerd[1471]: time="2025-01-17T12:11:04.384764230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cxt47,Uid:e3998cf8-9f75-4fdc-8820-fe486080b75a,Namespace:calico-system,Attempt:0,}" Jan 17 12:11:04.489525 sshd[4547]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:04.496865 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:42808.service: Deactivated successfully. Jan 17 12:11:04.498903 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:11:04.499519 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:11:04.500887 systemd-logind[1458]: Removed session 20. Jan 17 12:11:04.504447 containerd[1471]: time="2025-01-17T12:11:04.504158054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:04.504447 containerd[1471]: time="2025-01-17T12:11:04.504208781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:04.504447 containerd[1471]: time="2025-01-17T12:11:04.504221475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:04.504447 containerd[1471]: time="2025-01-17T12:11:04.504305215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:04.527765 systemd[1]: Started cri-containerd-5addccc9b060880c165dfc28aba256c23efc20d4732ec8c02ba9cd2d923cad62.scope - libcontainer container 5addccc9b060880c165dfc28aba256c23efc20d4732ec8c02ba9cd2d923cad62. 
Jan 17 12:11:04.550798 containerd[1471]: time="2025-01-17T12:11:04.550761455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cxt47,Uid:e3998cf8-9f75-4fdc-8820-fe486080b75a,Namespace:calico-system,Attempt:0,} returns sandbox id \"5addccc9b060880c165dfc28aba256c23efc20d4732ec8c02ba9cd2d923cad62\"" Jan 17 12:11:04.551568 kubelet[2607]: E0117 12:11:04.551543 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:04.551976 kubelet[2607]: I0117 12:11:04.551881 2607 scope.go:117] "RemoveContainer" containerID="acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361" Jan 17 12:11:04.553858 containerd[1471]: time="2025-01-17T12:11:04.553810379Z" level=info msg="CreateContainer within sandbox \"5addccc9b060880c165dfc28aba256c23efc20d4732ec8c02ba9cd2d923cad62\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:11:04.555756 containerd[1471]: time="2025-01-17T12:11:04.555684299Z" level=info msg="RemoveContainer for \"acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361\"" Jan 17 12:11:04.626419 containerd[1471]: time="2025-01-17T12:11:04.626343071Z" level=info msg="RemoveContainer for \"acff4535505c1b4c16c39f0a4343e2f6a547325a15b2d793c6fe5643db143361\" returns successfully" Jan 17 12:11:04.626794 kubelet[2607]: I0117 12:11:04.626627 2607 scope.go:117] "RemoveContainer" containerID="2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15" Jan 17 12:11:04.628037 containerd[1471]: time="2025-01-17T12:11:04.627985528Z" level=info msg="RemoveContainer for \"2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15\"" Jan 17 12:11:04.649848 containerd[1471]: time="2025-01-17T12:11:04.649570048Z" level=info msg="RemoveContainer for \"2eca933ffcc9ae63b0450ee549d366ab107794871058734d94e442b659c2ba15\" returns successfully" Jan 17 12:11:04.652096 kubelet[2607]: I0117 12:11:04.652037 2607 scope.go:117] "RemoveContainer" containerID="c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c" Jan 17 12:11:04.656432 containerd[1471]: time="2025-01-17T12:11:04.656179834Z" level=info msg="RemoveContainer for \"c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c\"" Jan 17 12:11:04.670616 containerd[1471]: time="2025-01-17T12:11:04.668145140Z" level=info msg="RemoveContainer for \"c5f29c1e6edd749af4095f9e21fd5ef896960b3f3bee3152ae090411d9380f2c\" returns successfully" Jan 17 12:11:04.689756 containerd[1471]: time="2025-01-17T12:11:04.689710374Z" level=info msg="CreateContainer within sandbox \"5addccc9b060880c165dfc28aba256c23efc20d4732ec8c02ba9cd2d923cad62\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e3e6ff21853ced60bb0616ac1a50ab31376ff0c4696ff8c804e3b4462c661c04\"" Jan 17 12:11:04.691373 containerd[1471]: time="2025-01-17T12:11:04.690459495Z" level=info msg="StartContainer for \"e3e6ff21853ced60bb0616ac1a50ab31376ff0c4696ff8c804e3b4462c661c04\"" Jan 17 12:11:04.720751 systemd[1]: Started cri-containerd-e3e6ff21853ced60bb0616ac1a50ab31376ff0c4696ff8c804e3b4462c661c04.scope - libcontainer container e3e6ff21853ced60bb0616ac1a50ab31376ff0c4696ff8c804e3b4462c661c04. 
Jan 17 12:11:04.749980 containerd[1471]: time="2025-01-17T12:11:04.749791790Z" level=info msg="StartContainer for \"e3e6ff21853ced60bb0616ac1a50ab31376ff0c4696ff8c804e3b4462c661c04\" returns successfully" Jan 17 12:11:04.800074 systemd[1]: cri-containerd-e3e6ff21853ced60bb0616ac1a50ab31376ff0c4696ff8c804e3b4462c661c04.scope: Deactivated successfully. Jan 17 12:11:04.839626 containerd[1471]: time="2025-01-17T12:11:04.839465554Z" level=info msg="shim disconnected" id=e3e6ff21853ced60bb0616ac1a50ab31376ff0c4696ff8c804e3b4462c661c04 namespace=k8s.io Jan 17 12:11:04.840118 containerd[1471]: time="2025-01-17T12:11:04.840068105Z" level=warning msg="cleaning up after shim disconnected" id=e3e6ff21853ced60bb0616ac1a50ab31376ff0c4696ff8c804e3b4462c661c04 namespace=k8s.io Jan 17 12:11:04.840157 containerd[1471]: time="2025-01-17T12:11:04.840116618Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:05.556032 kubelet[2607]: E0117 12:11:05.555634 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:05.557918 containerd[1471]: time="2025-01-17T12:11:05.557866904Z" level=info msg="CreateContainer within sandbox \"5addccc9b060880c165dfc28aba256c23efc20d4732ec8c02ba9cd2d923cad62\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:11:05.574733 containerd[1471]: time="2025-01-17T12:11:05.574648971Z" level=info msg="CreateContainer within sandbox \"5addccc9b060880c165dfc28aba256c23efc20d4732ec8c02ba9cd2d923cad62\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d297049e6da792a8cf37e624038dfe20f64ba2fbd0c527e7a4491a8c03da4be2\"" Jan 17 12:11:05.575272 containerd[1471]: time="2025-01-17T12:11:05.575216273Z" level=info msg="StartContainer for \"d297049e6da792a8cf37e624038dfe20f64ba2fbd0c527e7a4491a8c03da4be2\"" Jan 17 12:11:05.606791 systemd[1]: Started cri-containerd-d297049e6da792a8cf37e624038dfe20f64ba2fbd0c527e7a4491a8c03da4be2.scope - libcontainer container d297049e6da792a8cf37e624038dfe20f64ba2fbd0c527e7a4491a8c03da4be2. Jan 17 12:11:05.642258 containerd[1471]: time="2025-01-17T12:11:05.642209580Z" level=info msg="StartContainer for \"d297049e6da792a8cf37e624038dfe20f64ba2fbd0c527e7a4491a8c03da4be2\" returns successfully" Jan 17 12:11:06.164180 systemd[1]: cri-containerd-d297049e6da792a8cf37e624038dfe20f64ba2fbd0c527e7a4491a8c03da4be2.scope: Deactivated successfully. Jan 17 12:11:06.184434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d297049e6da792a8cf37e624038dfe20f64ba2fbd0c527e7a4491a8c03da4be2-rootfs.mount: Deactivated successfully. 
Jan 17 12:11:06.295407 containerd[1471]: time="2025-01-17T12:11:06.295330630Z" level=info msg="shim disconnected" id=d297049e6da792a8cf37e624038dfe20f64ba2fbd0c527e7a4491a8c03da4be2 namespace=k8s.io Jan 17 12:11:06.295407 containerd[1471]: time="2025-01-17T12:11:06.295399170Z" level=warning msg="cleaning up after shim disconnected" id=d297049e6da792a8cf37e624038dfe20f64ba2fbd0c527e7a4491a8c03da4be2 namespace=k8s.io Jan 17 12:11:06.295407 containerd[1471]: time="2025-01-17T12:11:06.295410041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:06.370844 kubelet[2607]: I0117 12:11:06.370788 2607 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b3aef6-8302-4600-bbe2-09c056e53e6a" path="/var/lib/kubelet/pods/20b3aef6-8302-4600-bbe2-09c056e53e6a/volumes" Jan 17 12:11:06.560216 kubelet[2607]: E0117 12:11:06.560172 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:06.571911 containerd[1471]: time="2025-01-17T12:11:06.571859571Z" level=info msg="CreateContainer within sandbox \"5addccc9b060880c165dfc28aba256c23efc20d4732ec8c02ba9cd2d923cad62\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:11:07.024426 containerd[1471]: time="2025-01-17T12:11:07.024350777Z" level=info msg="CreateContainer within sandbox \"5addccc9b060880c165dfc28aba256c23efc20d4732ec8c02ba9cd2d923cad62\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"55bcd0a2f44fecbda6fcdf6e7908683eef9edf1cdc6a49d646e848c17280d657\"" Jan 17 12:11:07.025139 containerd[1471]: time="2025-01-17T12:11:07.025073175Z" level=info msg="StartContainer for \"55bcd0a2f44fecbda6fcdf6e7908683eef9edf1cdc6a49d646e848c17280d657\"" Jan 17 12:11:07.052773 systemd[1]: Started cri-containerd-55bcd0a2f44fecbda6fcdf6e7908683eef9edf1cdc6a49d646e848c17280d657.scope - libcontainer container 55bcd0a2f44fecbda6fcdf6e7908683eef9edf1cdc6a49d646e848c17280d657. 
Jan 17 12:11:07.228327 containerd[1471]: time="2025-01-17T12:11:07.228271722Z" level=info msg="StartContainer for \"55bcd0a2f44fecbda6fcdf6e7908683eef9edf1cdc6a49d646e848c17280d657\" returns successfully" Jan 17 12:11:07.564370 kubelet[2607]: E0117 12:11:07.564312 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:07.622779 kubelet[2607]: I0117 12:11:07.622693 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cxt47" podStartSLOduration=3.622649099 podStartE2EDuration="3.622649099s" podCreationTimestamp="2025-01-17 12:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:11:07.618799252 +0000 UTC m=+85.406295016" watchObservedRunningTime="2025-01-17 12:11:07.622649099 +0000 UTC m=+85.410144853" Jan 17 12:11:08.315728 containerd[1471]: time="2025-01-17T12:11:08.315665861Z" level=info msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\"" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.435 [INFO][4861] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.435 [INFO][4861] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" iface="eth0" netns="/var/run/netns/cni-61a6d99e-6106-206e-e8ad-721916cc3e00" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.435 [INFO][4861] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" iface="eth0" netns="/var/run/netns/cni-61a6d99e-6106-206e-e8ad-721916cc3e00" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.436 [INFO][4861] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" iface="eth0" netns="/var/run/netns/cni-61a6d99e-6106-206e-e8ad-721916cc3e00" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.436 [INFO][4861] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.436 [INFO][4861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.462 [INFO][4869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" HandleID="k8s-pod-network.da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.462 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.462 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.467 [WARNING][4869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" HandleID="k8s-pod-network.da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.467 [INFO][4869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" HandleID="k8s-pod-network.da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.468 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:08.474154 containerd[1471]: 2025-01-17 12:11:08.471 [INFO][4861] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:08.474564 containerd[1471]: time="2025-01-17T12:11:08.474363023Z" level=info msg="TearDown network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\" successfully" Jan 17 12:11:08.474564 containerd[1471]: time="2025-01-17T12:11:08.474396096Z" level=info msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\" returns successfully" Jan 17 12:11:08.475265 containerd[1471]: time="2025-01-17T12:11:08.475213857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xfsj8,Uid:83490bae-2f03-49cc-b16c-ff7f265ed80b,Namespace:calico-system,Attempt:1,}" Jan 17 12:11:08.477219 systemd[1]: run-netns-cni\x2d61a6d99e\x2d6106\x2d206e\x2de8ad\x2d721916cc3e00.mount: Deactivated successfully. Jan 17 12:11:08.565679 kubelet[2607]: E0117 12:11:08.565632 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:08.782227 systemd-networkd[1409]: calibd889c8f326: Link UP Jan 17 12:11:08.782463 systemd-networkd[1409]: calibd889c8f326: Gained carrier Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.505 [INFO][4877] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.514 [INFO][4877] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xfsj8-eth0 csi-node-driver- calico-system 83490bae-2f03-49cc-b16c-ff7f265ed80b 1134 0 2025-01-17 12:10:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xfsj8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibd889c8f326 [] []}} ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Namespace="calico-system" Pod="csi-node-driver-xfsj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xfsj8-" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.515 [INFO][4877] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Namespace="calico-system" Pod="csi-node-driver-xfsj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 
12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.543 [INFO][4890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" HandleID="k8s-pod-network.6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.630 [INFO][4890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" HandleID="k8s-pod-network.6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f43f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xfsj8", "timestamp":"2025-01-17 12:11:08.543968622 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.630 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.630 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.630 [INFO][4890] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.632 [INFO][4890] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" host="localhost" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.675 [INFO][4890] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.679 [INFO][4890] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.680 [INFO][4890] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.682 [INFO][4890] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.682 [INFO][4890] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" host="localhost" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.683 [INFO][4890] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.701 [INFO][4890] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" host="localhost" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.769 [INFO][4890] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" host="localhost" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 
12:11:08.769 [INFO][4890] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" host="localhost" Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.769 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:08.814342 containerd[1471]: 2025-01-17 12:11:08.769 [INFO][4890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" HandleID="k8s-pod-network.6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:08.815110 containerd[1471]: 2025-01-17 12:11:08.774 [INFO][4877] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Namespace="calico-system" Pod="csi-node-driver-xfsj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xfsj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xfsj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83490bae-2f03-49cc-b16c-ff7f265ed80b", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xfsj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd889c8f326", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:08.815110 containerd[1471]: 2025-01-17 12:11:08.775 [INFO][4877] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Namespace="calico-system" Pod="csi-node-driver-xfsj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:08.815110 containerd[1471]: 2025-01-17 12:11:08.775 [INFO][4877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd889c8f326 ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Namespace="calico-system" Pod="csi-node-driver-xfsj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:08.815110 containerd[1471]: 2025-01-17 12:11:08.781 [INFO][4877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Namespace="calico-system" Pod="csi-node-driver-xfsj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:08.815110 
containerd[1471]: 2025-01-17 12:11:08.782 [INFO][4877] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Namespace="calico-system" Pod="csi-node-driver-xfsj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xfsj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xfsj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83490bae-2f03-49cc-b16c-ff7f265ed80b", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd", Pod:"csi-node-driver-xfsj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd889c8f326", MAC:"56:31:e9:72:cd:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:08.815110 containerd[1471]: 2025-01-17 12:11:08.811 [INFO][4877] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd" Namespace="calico-system" Pod="csi-node-driver-xfsj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:08.847533 containerd[1471]: time="2025-01-17T12:11:08.847391743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:08.847533 containerd[1471]: time="2025-01-17T12:11:08.847449543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:08.847533 containerd[1471]: time="2025-01-17T12:11:08.847476364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:08.847804 containerd[1471]: time="2025-01-17T12:11:08.847567177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:08.869736 systemd[1]: Started cri-containerd-6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd.scope - libcontainer container 6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd. 
Jan 17 12:11:08.883245 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:11:08.895132 containerd[1471]: time="2025-01-17T12:11:08.895087012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xfsj8,Uid:83490bae-2f03-49cc-b16c-ff7f265ed80b,Namespace:calico-system,Attempt:1,} returns sandbox id \"6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd\"" Jan 17 12:11:08.897431 containerd[1471]: time="2025-01-17T12:11:08.897132144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:11:09.223855 kernel: bpftool[5099]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:11:09.481165 systemd-networkd[1409]: vxlan.calico: Link UP Jan 17 12:11:09.481175 systemd-networkd[1409]: vxlan.calico: Gained carrier Jan 17 12:11:09.509958 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:55490.service - OpenSSH per-connection server daemon (10.0.0.1:55490). Jan 17 12:11:09.556166 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 55490 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:09.558506 sshd[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:09.564375 systemd-logind[1458]: New session 21 of user core. Jan 17 12:11:09.569788 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:11:09.711008 sshd[5132]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:09.714427 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:55490.service: Deactivated successfully. Jan 17 12:11:09.716802 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:11:09.718645 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:11:09.719769 systemd-logind[1458]: Removed session 21. 
Jan 17 12:11:10.593779 systemd-networkd[1409]: calibd889c8f326: Gained IPv6LL Jan 17 12:11:10.721750 systemd-networkd[1409]: vxlan.calico: Gained IPv6LL Jan 17 12:11:10.906244 containerd[1471]: time="2025-01-17T12:11:10.906105644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:10.907005 containerd[1471]: time="2025-01-17T12:11:10.906951076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:11:10.908188 containerd[1471]: time="2025-01-17T12:11:10.908154871Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:10.913348 containerd[1471]: time="2025-01-17T12:11:10.913296394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:10.913809 containerd[1471]: time="2025-01-17T12:11:10.913769546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.016590083s" Jan 17 12:11:10.913887 containerd[1471]: time="2025-01-17T12:11:10.913809041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:11:10.915684 containerd[1471]: time="2025-01-17T12:11:10.915643329Z" level=info msg="CreateContainer within sandbox \"6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:11:10.932982 containerd[1471]: time="2025-01-17T12:11:10.932932773Z" level=info msg="CreateContainer within sandbox \"6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0edbdeea8636c36675644fda0b7bf14e40d2aeca13373de281e2bc0869f83a4c\"" Jan 17 12:11:10.933466 containerd[1471]: time="2025-01-17T12:11:10.933442454Z" level=info msg="StartContainer for \"0edbdeea8636c36675644fda0b7bf14e40d2aeca13373de281e2bc0869f83a4c\"" Jan 17 12:11:10.968743 systemd[1]: Started cri-containerd-0edbdeea8636c36675644fda0b7bf14e40d2aeca13373de281e2bc0869f83a4c.scope - libcontainer container 0edbdeea8636c36675644fda0b7bf14e40d2aeca13373de281e2bc0869f83a4c. 
Jan 17 12:11:11.041494 containerd[1471]: time="2025-01-17T12:11:11.041438782Z" level=info msg="StartContainer for \"0edbdeea8636c36675644fda0b7bf14e40d2aeca13373de281e2bc0869f83a4c\" returns successfully" Jan 17 12:11:11.042669 containerd[1471]: time="2025-01-17T12:11:11.042637587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:11:11.314920 kubelet[2607]: E0117 12:11:11.314871 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:11.315431 containerd[1471]: time="2025-01-17T12:11:11.315175879Z" level=info msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\"" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.601 [INFO][5242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.602 [INFO][5242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" iface="eth0" netns="/var/run/netns/cni-bfd8aba7-1b47-d0d6-b837-2ea113d8ddd9" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.602 [INFO][5242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" iface="eth0" netns="/var/run/netns/cni-bfd8aba7-1b47-d0d6-b837-2ea113d8ddd9" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.603 [INFO][5242] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" iface="eth0" netns="/var/run/netns/cni-bfd8aba7-1b47-d0d6-b837-2ea113d8ddd9" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.603 [INFO][5242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.603 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.623 [INFO][5249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" HandleID="k8s-pod-network.469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.623 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.623 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.630 [WARNING][5249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" HandleID="k8s-pod-network.469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.630 [INFO][5249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" HandleID="k8s-pod-network.469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.632 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:11.637807 containerd[1471]: 2025-01-17 12:11:11.634 [INFO][5242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:11.638288 containerd[1471]: time="2025-01-17T12:11:11.638137071Z" level=info msg="TearDown network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" successfully" Jan 17 12:11:11.638288 containerd[1471]: time="2025-01-17T12:11:11.638174894Z" level=info msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" returns successfully" Jan 17 12:11:11.639227 containerd[1471]: time="2025-01-17T12:11:11.639199907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8975f968-wpfqg,Uid:c6c7969a-d094-4962-9f3d-83a3ce21e375,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:11:11.640474 systemd[1]: run-netns-cni\x2dbfd8aba7\x2d1b47\x2dd0d6\x2db837\x2d2ea113d8ddd9.mount: Deactivated successfully. Jan 17 12:11:11.782433 systemd-networkd[1409]: cali022a8eb8e5f: Link UP Jan 17 12:11:11.783489 systemd-networkd[1409]: cali022a8eb8e5f: Gained carrier Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.709 [INFO][5258] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0 calico-apiserver-c8975f968- calico-apiserver c6c7969a-d094-4962-9f3d-83a3ce21e375 1154 0 2025-01-17 12:10:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c8975f968 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c8975f968-wpfqg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali022a8eb8e5f [] []}} ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-wpfqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--wpfqg-" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.710 [INFO][5258] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-wpfqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.739 [INFO][5272] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" 
HandleID="k8s-pod-network.1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.750 [INFO][5272] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" HandleID="k8s-pod-network.1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c8975f968-wpfqg", "timestamp":"2025-01-17 12:11:11.739435144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.750 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.750 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.750 [INFO][5272] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.753 [INFO][5272] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" host="localhost" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.758 [INFO][5272] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.763 [INFO][5272] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.764 [INFO][5272] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.766 [INFO][5272] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.766 [INFO][5272] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" host="localhost" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.767 [INFO][5272] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34 Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.771 [INFO][5272] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" host="localhost" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.775 [INFO][5272] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" host="localhost" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.775 [INFO][5272] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" host="localhost" Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.775 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:11.799009 containerd[1471]: 2025-01-17 12:11:11.775 [INFO][5272] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" HandleID="k8s-pod-network.1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:11.799888 containerd[1471]: 2025-01-17 12:11:11.779 [INFO][5258] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-wpfqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0", GenerateName:"calico-apiserver-c8975f968-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6c7969a-d094-4962-9f3d-83a3ce21e375", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8975f968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c8975f968-wpfqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali022a8eb8e5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:11.799888 containerd[1471]: 2025-01-17 12:11:11.779 [INFO][5258] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-wpfqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:11.799888 containerd[1471]: 2025-01-17 12:11:11.779 [INFO][5258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali022a8eb8e5f ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-wpfqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:11.799888 containerd[1471]: 2025-01-17 12:11:11.783 [INFO][5258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-wpfqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 
17 12:11:11.799888 containerd[1471]: 2025-01-17 12:11:11.784 [INFO][5258] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-wpfqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0", GenerateName:"calico-apiserver-c8975f968-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6c7969a-d094-4962-9f3d-83a3ce21e375", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8975f968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34", Pod:"calico-apiserver-c8975f968-wpfqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali022a8eb8e5f", MAC:"ee:d8:bd:38:2e:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:11.799888 containerd[1471]: 2025-01-17 12:11:11.795 [INFO][5258] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-wpfqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:11.820897 containerd[1471]: time="2025-01-17T12:11:11.820795454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:11.820897 containerd[1471]: time="2025-01-17T12:11:11.820866048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:11.820897 containerd[1471]: time="2025-01-17T12:11:11.820884013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:11.821108 containerd[1471]: time="2025-01-17T12:11:11.820977551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:11.844759 systemd[1]: Started cri-containerd-1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34.scope - libcontainer container 1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34. 
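The ipam/ipam.go entries above trace the usual Calico allocation path for this node: the existing affinity for block 192.168.88.128/26 is confirmed and 192.168.88.130 is claimed for calico-apiserver-c8975f968-wpfqg (presumably .129 went to an earlier workload). A minimal sketch of the CIDR arithmetic behind those numbers, in Go (my own code, not Calico's allocator, which tracks claims in its datastore):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // block with host affinity in the log
        claimed := netip.MustParseAddr("192.168.88.130")    // address handed to the apiserver pod

        // A /26 spans 2^(32-26) = 64 addresses.
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))
        fmt.Printf("%s inside block: %v\n", claimed, block.Contains(claimed))

        // Walk the start of the block the way a naive allocator would scan for a free slot.
        for a, n := block.Addr(), 0; n < 4 && block.Contains(a); a, n = a.Next(), n+1 {
            fmt.Println("candidate:", a)
        }
    }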
Jan 17 12:11:11.858485 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:11:11.884539 containerd[1471]: time="2025-01-17T12:11:11.884491915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8975f968-wpfqg,Uid:c6c7969a-d094-4962-9f3d-83a3ce21e375,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34\"" Jan 17 12:11:12.315828 containerd[1471]: time="2025-01-17T12:11:12.315315902Z" level=info msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\"" Jan 17 12:11:12.315828 containerd[1471]: time="2025-01-17T12:11:12.315732445Z" level=info msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\"" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.364 [INFO][5366] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" iface="eth0" netns="/var/run/netns/cni-6ace3d58-c28e-c616-1ab4-cbdef3f45048" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" iface="eth0" netns="/var/run/netns/cni-6ace3d58-c28e-c616-1ab4-cbdef3f45048" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" iface="eth0" netns="/var/run/netns/cni-6ace3d58-c28e-c616-1ab4-cbdef3f45048" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5366] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.389 [INFO][5382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" HandleID="k8s-pod-network.d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.390 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.390 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.395 [WARNING][5382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" HandleID="k8s-pod-network.d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.395 [INFO][5382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" HandleID="k8s-pod-network.d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.396 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:12.401288 containerd[1471]: 2025-01-17 12:11:12.398 [INFO][5366] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:12.404922 containerd[1471]: time="2025-01-17T12:11:12.404744925Z" level=info msg="TearDown network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" successfully" Jan 17 12:11:12.404922 containerd[1471]: time="2025-01-17T12:11:12.404797416Z" level=info msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" returns successfully" Jan 17 12:11:12.405527 systemd[1]: run-netns-cni\x2d6ace3d58\x2dc28e\x2dc616\x2d1ab4\x2dcbdef3f45048.mount: Deactivated successfully. Jan 17 12:11:12.405977 containerd[1471]: time="2025-01-17T12:11:12.405785849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d754b8cc8-79mq6,Uid:09d6eb3a-b020-453e-a3b2-1c2857fad614,Namespace:calico-system,Attempt:1,}" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5365] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" iface="eth0" netns="/var/run/netns/cni-45e9d895-e8d8-fdf2-f74d-261cfd727c58" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" iface="eth0" netns="/var/run/netns/cni-45e9d895-e8d8-fdf2-f74d-261cfd727c58" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" iface="eth0" netns="/var/run/netns/cni-45e9d895-e8d8-fdf2-f74d-261cfd727c58" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5365] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.366 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.390 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" HandleID="k8s-pod-network.fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.390 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.396 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.401 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" HandleID="k8s-pod-network.fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.401 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" HandleID="k8s-pod-network.fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.403 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:12.408895 containerd[1471]: 2025-01-17 12:11:12.406 [INFO][5365] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:12.409799 containerd[1471]: time="2025-01-17T12:11:12.409655222Z" level=info msg="TearDown network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" successfully" Jan 17 12:11:12.409799 containerd[1471]: time="2025-01-17T12:11:12.409688255Z" level=info msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" returns successfully" Jan 17 12:11:12.410687 kubelet[2607]: E0117 12:11:12.410639 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:12.411119 containerd[1471]: time="2025-01-17T12:11:12.411081851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4grjz,Uid:cf1e6aaa-53c2-4de6-a445-b92ba78d0548,Namespace:kube-system,Attempt:1,}" Jan 17 12:11:12.412343 systemd[1]: run-netns-cni\x2d45e9d895\x2de8d8\x2dfdf2\x2df74d\x2d261cfd727c58.mount: Deactivated successfully. 
Jan 17 12:11:12.590475 systemd-networkd[1409]: caliaac9e41ea12: Link UP Jan 17 12:11:12.591012 systemd-networkd[1409]: caliaac9e41ea12: Gained carrier Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.514 [INFO][5400] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0 calico-kube-controllers-6d754b8cc8- calico-system 09d6eb3a-b020-453e-a3b2-1c2857fad614 1164 0 2025-01-17 12:10:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d754b8cc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d754b8cc8-79mq6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaac9e41ea12 [] []}} ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Namespace="calico-system" Pod="calico-kube-controllers-6d754b8cc8-79mq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.514 [INFO][5400] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Namespace="calico-system" Pod="calico-kube-controllers-6d754b8cc8-79mq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.547 [INFO][5429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" HandleID="k8s-pod-network.c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.557 [INFO][5429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" HandleID="k8s-pod-network.c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003acd10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d754b8cc8-79mq6", "timestamp":"2025-01-17 12:11:12.547680299 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.557 [INFO][5429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.558 [INFO][5429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.558 [INFO][5429] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.559 [INFO][5429] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" host="localhost" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.563 [INFO][5429] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.569 [INFO][5429] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.571 [INFO][5429] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.573 [INFO][5429] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.573 [INFO][5429] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" host="localhost" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.574 [INFO][5429] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5 Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.578 [INFO][5429] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" host="localhost" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.584 [INFO][5429] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" host="localhost" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.584 [INFO][5429] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" host="localhost" Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.584 [INFO][5429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:11:12.607147 containerd[1471]: 2025-01-17 12:11:12.584 [INFO][5429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" HandleID="k8s-pod-network.c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.608036 containerd[1471]: 2025-01-17 12:11:12.587 [INFO][5400] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Namespace="calico-system" Pod="calico-kube-controllers-6d754b8cc8-79mq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0", GenerateName:"calico-kube-controllers-6d754b8cc8-", Namespace:"calico-system", SelfLink:"", UID:"09d6eb3a-b020-453e-a3b2-1c2857fad614", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d754b8cc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d754b8cc8-79mq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaac9e41ea12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:12.608036 containerd[1471]: 2025-01-17 12:11:12.587 [INFO][5400] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Namespace="calico-system" Pod="calico-kube-controllers-6d754b8cc8-79mq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.608036 containerd[1471]: 2025-01-17 12:11:12.587 [INFO][5400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaac9e41ea12 ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Namespace="calico-system" Pod="calico-kube-controllers-6d754b8cc8-79mq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.608036 containerd[1471]: 2025-01-17 12:11:12.591 [INFO][5400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Namespace="calico-system" Pod="calico-kube-controllers-6d754b8cc8-79mq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.608036 containerd[1471]: 2025-01-17 12:11:12.591 [INFO][5400] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Namespace="calico-system" Pod="calico-kube-controllers-6d754b8cc8-79mq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0", GenerateName:"calico-kube-controllers-6d754b8cc8-", Namespace:"calico-system", SelfLink:"", UID:"09d6eb3a-b020-453e-a3b2-1c2857fad614", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d754b8cc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5", Pod:"calico-kube-controllers-6d754b8cc8-79mq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaac9e41ea12", MAC:"b2:82:bb:a2:87:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:12.608036 containerd[1471]: 2025-01-17 12:11:12.604 [INFO][5400] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5" Namespace="calico-system" Pod="calico-kube-controllers-6d754b8cc8-79mq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:12.637412 systemd-networkd[1409]: cali90580dcda42: Link UP Jan 17 12:11:12.637723 systemd-networkd[1409]: cali90580dcda42: Gained carrier Jan 17 12:11:12.644872 containerd[1471]: time="2025-01-17T12:11:12.640437695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:12.644872 containerd[1471]: time="2025-01-17T12:11:12.644129260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:12.644872 containerd[1471]: time="2025-01-17T12:11:12.644157343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:12.644872 containerd[1471]: time="2025-01-17T12:11:12.644302229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.524 [INFO][5413] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0 coredns-7db6d8ff4d- kube-system cf1e6aaa-53c2-4de6-a445-b92ba78d0548 1165 0 2025-01-17 12:09:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-4grjz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali90580dcda42 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4grjz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4grjz-" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.525 [INFO][5413] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4grjz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.566 [INFO][5434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" HandleID="k8s-pod-network.15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.573 [INFO][5434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" HandleID="k8s-pod-network.15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003653c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-4grjz", "timestamp":"2025-01-17 12:11:12.566279962 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.574 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.585 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.585 [INFO][5434] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.587 [INFO][5434] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" host="localhost" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.593 [INFO][5434] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.597 [INFO][5434] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.598 [INFO][5434] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.600 [INFO][5434] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.600 [INFO][5434] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" host="localhost" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.603 [INFO][5434] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90 Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.610 [INFO][5434] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" host="localhost" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.621 [INFO][5434] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" host="localhost" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.624 [INFO][5434] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" host="localhost" Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.624 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:11:12.653486 containerd[1471]: 2025-01-17 12:11:12.624 [INFO][5434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" HandleID="k8s-pod-network.15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.654186 containerd[1471]: 2025-01-17 12:11:12.632 [INFO][5413] cni-plugin/k8s.go 386: Populated endpoint ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4grjz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf1e6aaa-53c2-4de6-a445-b92ba78d0548", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-4grjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90580dcda42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:12.654186 containerd[1471]: 2025-01-17 12:11:12.633 [INFO][5413] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4grjz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.654186 containerd[1471]: 2025-01-17 12:11:12.633 [INFO][5413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90580dcda42 ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4grjz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.654186 containerd[1471]: 2025-01-17 12:11:12.636 [INFO][5413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4grjz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.654186 containerd[1471]: 2025-01-17 12:11:12.639 
[INFO][5413] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4grjz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf1e6aaa-53c2-4de6-a445-b92ba78d0548", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90", Pod:"coredns-7db6d8ff4d-4grjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90580dcda42", MAC:"6e:0c:d4:34:e8:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:12.654186 containerd[1471]: 2025-01-17 12:11:12.648 [INFO][5413] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4grjz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:12.668796 systemd[1]: Started cri-containerd-c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5.scope - libcontainer container c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5. Jan 17 12:11:12.680615 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:11:12.711453 containerd[1471]: time="2025-01-17T12:11:12.711314105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d754b8cc8-79mq6,Uid:09d6eb3a-b020-453e-a3b2-1c2857fad614,Namespace:calico-system,Attempt:1,} returns sandbox id \"c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5\"" Jan 17 12:11:12.714139 containerd[1471]: time="2025-01-17T12:11:12.714003781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:12.714139 containerd[1471]: time="2025-01-17T12:11:12.714059627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:12.714139 containerd[1471]: time="2025-01-17T12:11:12.714078814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:12.714258 containerd[1471]: time="2025-01-17T12:11:12.714161381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:12.742825 systemd[1]: Started cri-containerd-15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90.scope - libcontainer container 15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90. Jan 17 12:11:12.755784 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:11:12.785097 containerd[1471]: time="2025-01-17T12:11:12.785049273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4grjz,Uid:cf1e6aaa-53c2-4de6-a445-b92ba78d0548,Namespace:kube-system,Attempt:1,} returns sandbox id \"15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90\"" Jan 17 12:11:12.786099 kubelet[2607]: E0117 12:11:12.786069 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:12.789312 containerd[1471]: time="2025-01-17T12:11:12.789266429Z" level=info msg="CreateContainer within sandbox \"15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:11:12.813119 containerd[1471]: time="2025-01-17T12:11:12.813067675Z" level=info msg="CreateContainer within sandbox \"15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"058543fa4c5d9d6c03243c69468b4ac233c3539503bf66a159d5c0b344859e27\"" Jan 17 12:11:12.814088 containerd[1471]: time="2025-01-17T12:11:12.814050828Z" level=info msg="StartContainer for \"058543fa4c5d9d6c03243c69468b4ac233c3539503bf66a159d5c0b344859e27\"" Jan 17 12:11:12.848408 containerd[1471]: time="2025-01-17T12:11:12.848291326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:12.848760 systemd[1]: Started cri-containerd-058543fa4c5d9d6c03243c69468b4ac233c3539503bf66a159d5c0b344859e27.scope - libcontainer container 058543fa4c5d9d6c03243c69468b4ac233c3539503bf66a159d5c0b344859e27. 
Jan 17 12:11:12.850233 containerd[1471]: time="2025-01-17T12:11:12.850165688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:11:12.851606 containerd[1471]: time="2025-01-17T12:11:12.851548632Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:12.858148 containerd[1471]: time="2025-01-17T12:11:12.857813942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:12.859622 containerd[1471]: time="2025-01-17T12:11:12.859022725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.816326176s" Jan 17 12:11:12.859622 containerd[1471]: time="2025-01-17T12:11:12.859073632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:11:12.861623 containerd[1471]: time="2025-01-17T12:11:12.861539371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:11:12.862546 containerd[1471]: time="2025-01-17T12:11:12.862470645Z" level=info msg="CreateContainer within sandbox \"6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:11:12.880199 containerd[1471]: time="2025-01-17T12:11:12.879923262Z" level=info msg="CreateContainer within sandbox \"6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"55f92341e25cd8bceacd92fe0ccc51658fb21a9cfff91670c76b638dd18bf513\"" Jan 17 12:11:12.881192 containerd[1471]: time="2025-01-17T12:11:12.881155491Z" level=info msg="StartContainer for \"55f92341e25cd8bceacd92fe0ccc51658fb21a9cfff91670c76b638dd18bf513\"" Jan 17 12:11:12.884573 containerd[1471]: time="2025-01-17T12:11:12.884535762Z" level=info msg="StartContainer for \"058543fa4c5d9d6c03243c69468b4ac233c3539503bf66a159d5c0b344859e27\" returns successfully" Jan 17 12:11:12.915164 systemd[1]: Started cri-containerd-55f92341e25cd8bceacd92fe0ccc51658fb21a9cfff91670c76b638dd18bf513.scope - libcontainer container 55f92341e25cd8bceacd92fe0ccc51658fb21a9cfff91670c76b638dd18bf513. 
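The PullImage/"Pulled image … in 1.816326176s" entries come from containerd's CRI plugin fetching ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1 for the csi-node-driver pod. A rough sketch of an equivalent standalone pull against the same socket using the containerd Go client (assuming the containerd 1.x client API; this is not the CRI code path that produced the lines above):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, err := img.Size(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("pulled %s (%d bytes) in %s\n", img.Name(), size, time.Since(start))
    }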
Jan 17 12:11:12.955050 containerd[1471]: time="2025-01-17T12:11:12.954979698Z" level=info msg="StartContainer for \"55f92341e25cd8bceacd92fe0ccc51658fb21a9cfff91670c76b638dd18bf513\" returns successfully" Jan 17 12:11:13.398496 kubelet[2607]: I0117 12:11:13.398415 2607 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:11:13.398496 kubelet[2607]: I0117 12:11:13.398462 2607 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:11:13.591329 kubelet[2607]: E0117 12:11:13.590233 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:13.600767 kubelet[2607]: I0117 12:11:13.600643 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4grjz" podStartSLOduration=76.600621617 podStartE2EDuration="1m16.600621617s" podCreationTimestamp="2025-01-17 12:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:11:13.599680815 +0000 UTC m=+91.387176589" watchObservedRunningTime="2025-01-17 12:11:13.600621617 +0000 UTC m=+91.388117382" Jan 17 12:11:13.611367 kubelet[2607]: I0117 12:11:13.610854 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xfsj8" podStartSLOduration=66.646843579 podStartE2EDuration="1m10.610835756s" podCreationTimestamp="2025-01-17 12:10:03 +0000 UTC" firstStartedPulling="2025-01-17 12:11:08.896664031 +0000 UTC m=+86.684159785" lastFinishedPulling="2025-01-17 12:11:12.860656208 +0000 UTC m=+90.648151962" observedRunningTime="2025-01-17 12:11:13.610321116 +0000 UTC m=+91.397816870" watchObservedRunningTime="2025-01-17 12:11:13.610835756 +0000 UTC m=+91.398331500" Jan 17 12:11:13.666850 systemd-networkd[1409]: cali022a8eb8e5f: Gained IPv6LL Jan 17 12:11:13.793808 systemd-networkd[1409]: cali90580dcda42: Gained IPv6LL Jan 17 12:11:14.114081 systemd-networkd[1409]: caliaac9e41ea12: Gained IPv6LL Jan 17 12:11:14.598051 kubelet[2607]: E0117 12:11:14.598019 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:14.723623 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:55504.service - OpenSSH per-connection server daemon (10.0.0.1:55504). Jan 17 12:11:14.769126 sshd[5645]: Accepted publickey for core from 10.0.0.1 port 55504 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:14.771570 sshd[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:14.778394 systemd-logind[1458]: New session 22 of user core. Jan 17 12:11:14.784643 systemd[1]: Started session-22.scope - Session 22 of User core. 
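The pod_startup_latency_tracker entries separate image-pull time from the startup SLO: for csi-node-driver-xfsj8 the podStartE2EDuration of 1m10.610835756s minus the pull window (firstStartedPulling to lastFinishedPulling) gives exactly the podStartSLOduration of 66.646843579s. A worked check in Go (my own arithmetic over the logged values, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    // mustTime parses timestamps in the form kubelet prints them,
    // e.g. "2025-01-17 12:11:08.896664031 +0000 UTC".
    func mustTime(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        firstStartedPulling := mustTime("2025-01-17 12:11:08.896664031 +0000 UTC")
        lastFinishedPulling := mustTime("2025-01-17 12:11:12.860656208 +0000 UTC")
        e2e, err := time.ParseDuration("1m10.610835756s") // podStartE2EDuration
        if err != nil {
            panic(err)
        }

        pulling := lastFinishedPulling.Sub(firstStartedPulling)
        fmt.Println("image pull window:", pulling)     // 3.963992177s
        fmt.Println("E2E minus pulling:", e2e-pulling) // 1m6.646843579s = podStartSLOduration
    }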
Jan 17 12:11:14.905037 containerd[1471]: time="2025-01-17T12:11:14.904853932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:14.905725 containerd[1471]: time="2025-01-17T12:11:14.905528116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:11:14.907025 containerd[1471]: time="2025-01-17T12:11:14.906980583Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:14.909450 containerd[1471]: time="2025-01-17T12:11:14.909407355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:14.911048 containerd[1471]: time="2025-01-17T12:11:14.910611057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.049042051s" Jan 17 12:11:14.911048 containerd[1471]: time="2025-01-17T12:11:14.910651043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:11:14.912361 containerd[1471]: time="2025-01-17T12:11:14.912325883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:11:14.916327 containerd[1471]: time="2025-01-17T12:11:14.916270546Z" level=info msg="CreateContainer within sandbox \"1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:11:14.934276 containerd[1471]: time="2025-01-17T12:11:14.934211255Z" level=info msg="CreateContainer within sandbox \"1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e4304130732ac2423741fbdbc9c57a168c6b5c8750b6f51045b079d3398ce9c\"" Jan 17 12:11:14.935157 containerd[1471]: time="2025-01-17T12:11:14.935109297Z" level=info msg="StartContainer for \"3e4304130732ac2423741fbdbc9c57a168c6b5c8750b6f51045b079d3398ce9c\"" Jan 17 12:11:14.963485 sshd[5645]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:14.969794 systemd[1]: Started cri-containerd-3e4304130732ac2423741fbdbc9c57a168c6b5c8750b6f51045b079d3398ce9c.scope - libcontainer container 3e4304130732ac2423741fbdbc9c57a168c6b5c8750b6f51045b079d3398ce9c. Jan 17 12:11:14.970375 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:55504.service: Deactivated successfully. Jan 17 12:11:14.973085 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:11:14.975422 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:11:14.976541 systemd-logind[1458]: Removed session 22. 
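A back-of-the-envelope transfer rate for the apiserver image pull logged above, 42001404 bytes read in 2.049042051s, comes out near 19.5 MiB/s; this is plain arithmetic on the logged figures (how containerd accounts "bytes read" against the reported image size of 43494504 is not derived here):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 42001404.0 // "bytes read" from the stop-pulling log line
        elapsed, err := time.ParseDuration("2.049042051s")
        if err != nil {
            panic(err)
        }
        fmt.Printf("~%.1f MiB/s\n", bytesRead/elapsed.Seconds()/(1<<20)) // ~19.5 MiB/s
    }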
Jan 17 12:11:15.013568 containerd[1471]: time="2025-01-17T12:11:15.013517096Z" level=info msg="StartContainer for \"3e4304130732ac2423741fbdbc9c57a168c6b5c8750b6f51045b079d3398ce9c\" returns successfully" Jan 17 12:11:15.314457 kubelet[2607]: E0117 12:11:15.314417 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:15.602115 kubelet[2607]: E0117 12:11:15.601947 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:16.315177 containerd[1471]: time="2025-01-17T12:11:16.315118996Z" level=info msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\"" Jan 17 12:11:17.120679 kubelet[2607]: I0117 12:11:17.120569 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c8975f968-wpfqg" podStartSLOduration=71.094767311 podStartE2EDuration="1m14.120546493s" podCreationTimestamp="2025-01-17 12:10:03 +0000 UTC" firstStartedPulling="2025-01-17 12:11:11.886046268 +0000 UTC m=+89.673542022" lastFinishedPulling="2025-01-17 12:11:14.91182545 +0000 UTC m=+92.699321204" observedRunningTime="2025-01-17 12:11:15.614510141 +0000 UTC m=+93.402005895" watchObservedRunningTime="2025-01-17 12:11:17.120546493 +0000 UTC m=+94.908042247" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.214 [INFO][5735] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.215 [INFO][5735] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" iface="eth0" netns="/var/run/netns/cni-277eda9f-997a-5619-3ba9-4ecae9f3cac1" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.216 [INFO][5735] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" iface="eth0" netns="/var/run/netns/cni-277eda9f-997a-5619-3ba9-4ecae9f3cac1" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.216 [INFO][5735] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" iface="eth0" netns="/var/run/netns/cni-277eda9f-997a-5619-3ba9-4ecae9f3cac1" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.216 [INFO][5735] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.216 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.242 [INFO][5743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" HandleID="k8s-pod-network.6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.242 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.242 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.248 [WARNING][5743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" HandleID="k8s-pod-network.6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.248 [INFO][5743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" HandleID="k8s-pod-network.6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.250 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:17.256962 containerd[1471]: 2025-01-17 12:11:17.254 [INFO][5735] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:17.260651 containerd[1471]: time="2025-01-17T12:11:17.258876697Z" level=info msg="TearDown network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" successfully" Jan 17 12:11:17.260651 containerd[1471]: time="2025-01-17T12:11:17.258913307Z" level=info msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" returns successfully" Jan 17 12:11:17.260651 containerd[1471]: time="2025-01-17T12:11:17.259967945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8975f968-4wqpn,Uid:50e92775-825e-4d1d-9a42-956f2281a0b9,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:11:17.260247 systemd[1]: run-netns-cni\x2d277eda9f\x2d997a\x2d5619\x2d3ba9\x2d4ecae9f3cac1.mount: Deactivated successfully. Jan 17 12:11:17.315349 containerd[1471]: time="2025-01-17T12:11:17.315300916Z" level=info msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\"" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.653 [INFO][5767] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.654 [INFO][5767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" iface="eth0" netns="/var/run/netns/cni-a4722225-20ba-f446-7745-e951f126d0da" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.655 [INFO][5767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" iface="eth0" netns="/var/run/netns/cni-a4722225-20ba-f446-7745-e951f126d0da" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.655 [INFO][5767] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" iface="eth0" netns="/var/run/netns/cni-a4722225-20ba-f446-7745-e951f126d0da" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.655 [INFO][5767] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.655 [INFO][5767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.913 [INFO][5774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" HandleID="k8s-pod-network.5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.913 [INFO][5774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.913 [INFO][5774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.918 [WARNING][5774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" HandleID="k8s-pod-network.5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.918 [INFO][5774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" HandleID="k8s-pod-network.5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.920 [INFO][5774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:17.926968 containerd[1471]: 2025-01-17 12:11:17.922 [INFO][5767] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:17.927581 containerd[1471]: time="2025-01-17T12:11:17.927209812Z" level=info msg="TearDown network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" successfully" Jan 17 12:11:17.927581 containerd[1471]: time="2025-01-17T12:11:17.927247814Z" level=info msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" returns successfully" Jan 17 12:11:17.927790 kubelet[2607]: E0117 12:11:17.927758 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:17.930772 containerd[1471]: time="2025-01-17T12:11:17.930732726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hmksg,Uid:7d963b6e-e967-461b-88c1-043d231c7107,Namespace:kube-system,Attempt:1,}" Jan 17 12:11:17.930871 systemd[1]: run-netns-cni\x2da4722225\x2d20ba\x2df446\x2d7745\x2de951f126d0da.mount: Deactivated successfully. 
Jan 17 12:11:18.201978 containerd[1471]: time="2025-01-17T12:11:18.201913020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:18.213562 containerd[1471]: time="2025-01-17T12:11:18.213496641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:11:18.218259 containerd[1471]: time="2025-01-17T12:11:18.217719246Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:18.225028 containerd[1471]: time="2025-01-17T12:11:18.224424114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:18.226406 containerd[1471]: time="2025-01-17T12:11:18.226322677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.313946488s" Jan 17 12:11:18.226406 containerd[1471]: time="2025-01-17T12:11:18.226368133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:11:18.238339 containerd[1471]: time="2025-01-17T12:11:18.237694946Z" level=info msg="CreateContainer within sandbox \"c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:11:18.261895 containerd[1471]: time="2025-01-17T12:11:18.261083920Z" level=info msg="CreateContainer within sandbox \"c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5bc7b5f22090683c5b47fb024814bf655a5c9b4c69467fcd733fce33cef8ffc7\"" Jan 17 12:11:18.263184 containerd[1471]: time="2025-01-17T12:11:18.262697702Z" level=info msg="StartContainer for \"5bc7b5f22090683c5b47fb024814bf655a5c9b4c69467fcd733fce33cef8ffc7\"" Jan 17 12:11:18.309761 systemd[1]: Started cri-containerd-5bc7b5f22090683c5b47fb024814bf655a5c9b4c69467fcd733fce33cef8ffc7.scope - libcontainer container 5bc7b5f22090683c5b47fb024814bf655a5c9b4c69467fcd733fce33cef8ffc7. 
Jan 17 12:11:18.362168 systemd-networkd[1409]: caliaa81a242015: Link UP Jan 17 12:11:18.363666 systemd-networkd[1409]: caliaa81a242015: Gained carrier Jan 17 12:11:18.366224 containerd[1471]: time="2025-01-17T12:11:18.366115235Z" level=info msg="StartContainer for \"5bc7b5f22090683c5b47fb024814bf655a5c9b4c69467fcd733fce33cef8ffc7\" returns successfully" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.255 [INFO][5783] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0 calico-apiserver-c8975f968- calico-apiserver 50e92775-825e-4d1d-9a42-956f2281a0b9 1225 0 2025-01-17 12:10:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c8975f968 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c8975f968-4wqpn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa81a242015 [] []}} ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-4wqpn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--4wqpn-" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.256 [INFO][5783] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-4wqpn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.306 [INFO][5814] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" HandleID="k8s-pod-network.8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.319 [INFO][5814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" HandleID="k8s-pod-network.8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c8975f968-4wqpn", "timestamp":"2025-01-17 12:11:18.306079369 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.319 [INFO][5814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.320 [INFO][5814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.320 [INFO][5814] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.322 [INFO][5814] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" host="localhost" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.327 [INFO][5814] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.331 [INFO][5814] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.333 [INFO][5814] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.338 [INFO][5814] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.338 [INFO][5814] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" host="localhost" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.339 [INFO][5814] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142 Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.344 [INFO][5814] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" host="localhost" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.354 [INFO][5814] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" host="localhost" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.354 [INFO][5814] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" host="localhost" Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.354 [INFO][5814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:11:18.384335 containerd[1471]: 2025-01-17 12:11:18.354 [INFO][5814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" HandleID="k8s-pod-network.8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:18.385158 containerd[1471]: 2025-01-17 12:11:18.356 [INFO][5783] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-4wqpn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0", GenerateName:"calico-apiserver-c8975f968-", Namespace:"calico-apiserver", SelfLink:"", UID:"50e92775-825e-4d1d-9a42-956f2281a0b9", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8975f968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c8975f968-4wqpn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa81a242015", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:18.385158 containerd[1471]: 2025-01-17 12:11:18.356 [INFO][5783] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-4wqpn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:18.385158 containerd[1471]: 2025-01-17 12:11:18.357 [INFO][5783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa81a242015 ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-4wqpn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:18.385158 containerd[1471]: 2025-01-17 12:11:18.364 [INFO][5783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-4wqpn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:18.385158 containerd[1471]: 2025-01-17 12:11:18.364 [INFO][5783] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" 
Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-4wqpn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0", GenerateName:"calico-apiserver-c8975f968-", Namespace:"calico-apiserver", SelfLink:"", UID:"50e92775-825e-4d1d-9a42-956f2281a0b9", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8975f968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142", Pod:"calico-apiserver-c8975f968-4wqpn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa81a242015", MAC:"f6:69:f4:d6:73:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:18.385158 containerd[1471]: 2025-01-17 12:11:18.381 [INFO][5783] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142" Namespace="calico-apiserver" Pod="calico-apiserver-c8975f968-4wqpn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:18.455450 containerd[1471]: time="2025-01-17T12:11:18.455270020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:18.455450 containerd[1471]: time="2025-01-17T12:11:18.455376843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:18.455450 containerd[1471]: time="2025-01-17T12:11:18.455405097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:18.455662 containerd[1471]: time="2025-01-17T12:11:18.455532218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:18.484720 systemd[1]: Started cri-containerd-8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142.scope - libcontainer container 8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142. 
Jan 17 12:11:18.495629 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:11:18.518796 containerd[1471]: time="2025-01-17T12:11:18.518750030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8975f968-4wqpn,Uid:50e92775-825e-4d1d-9a42-956f2281a0b9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142\"" Jan 17 12:11:18.521309 containerd[1471]: time="2025-01-17T12:11:18.521287047Z" level=info msg="CreateContainer within sandbox \"8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:11:18.540876 systemd-networkd[1409]: calibd790bff320: Link UP Jan 17 12:11:18.541084 systemd-networkd[1409]: calibd790bff320: Gained carrier Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.278 [INFO][5794] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0 coredns-7db6d8ff4d- kube-system 7d963b6e-e967-461b-88c1-043d231c7107 1231 0 2025-01-17 12:09:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-hmksg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibd790bff320 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmksg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hmksg-" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.281 [INFO][5794] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmksg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.332 [INFO][5834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" HandleID="k8s-pod-network.4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.340 [INFO][5834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" HandleID="k8s-pod-network.4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000289a10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-hmksg", "timestamp":"2025-01-17 12:11:18.332222724 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.340 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.354 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.354 [INFO][5834] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.357 [INFO][5834] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" host="localhost" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.365 [INFO][5834] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.372 [INFO][5834] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.376 [INFO][5834] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.386 [INFO][5834] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.386 [INFO][5834] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" host="localhost" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.388 [INFO][5834] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.402 [INFO][5834] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" host="localhost" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.534 [INFO][5834] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" host="localhost" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.534 [INFO][5834] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" host="localhost" Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.534 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:11:18.557374 containerd[1471]: 2025-01-17 12:11:18.534 [INFO][5834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" HandleID="k8s-pod-network.4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:18.558223 containerd[1471]: 2025-01-17 12:11:18.537 [INFO][5794] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmksg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7d963b6e-e967-461b-88c1-043d231c7107", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-hmksg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd790bff320", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:18.558223 containerd[1471]: 2025-01-17 12:11:18.538 [INFO][5794] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmksg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:18.558223 containerd[1471]: 2025-01-17 12:11:18.538 [INFO][5794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd790bff320 ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmksg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:18.558223 containerd[1471]: 2025-01-17 12:11:18.540 [INFO][5794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmksg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:18.558223 containerd[1471]: 2025-01-17 12:11:18.540 
[INFO][5794] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmksg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7d963b6e-e967-461b-88c1-043d231c7107", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b", Pod:"coredns-7db6d8ff4d-hmksg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd790bff320", MAC:"82:98:77:90:dd:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:18.558223 containerd[1471]: 2025-01-17 12:11:18.554 [INFO][5794] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmksg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:18.561654 containerd[1471]: time="2025-01-17T12:11:18.561562744Z" level=info msg="CreateContainer within sandbox \"8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a22a8b48bd8d47403a271ab8e6f909e36924b36a36e48961258ef6cb7e0d620\"" Jan 17 12:11:18.564244 containerd[1471]: time="2025-01-17T12:11:18.562754592Z" level=info msg="StartContainer for \"7a22a8b48bd8d47403a271ab8e6f909e36924b36a36e48961258ef6cb7e0d620\"" Jan 17 12:11:18.596051 containerd[1471]: time="2025-01-17T12:11:18.595948023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:18.596051 containerd[1471]: time="2025-01-17T12:11:18.596020210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:18.596424 containerd[1471]: time="2025-01-17T12:11:18.596047102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:18.596424 containerd[1471]: time="2025-01-17T12:11:18.596162551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:18.609190 systemd[1]: Started cri-containerd-7a22a8b48bd8d47403a271ab8e6f909e36924b36a36e48961258ef6cb7e0d620.scope - libcontainer container 7a22a8b48bd8d47403a271ab8e6f909e36924b36a36e48961258ef6cb7e0d620. Jan 17 12:11:18.627066 systemd[1]: Started cri-containerd-4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b.scope - libcontainer container 4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b. Jan 17 12:11:18.650691 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:11:18.686717 containerd[1471]: time="2025-01-17T12:11:18.686639922Z" level=info msg="StartContainer for \"7a22a8b48bd8d47403a271ab8e6f909e36924b36a36e48961258ef6cb7e0d620\" returns successfully" Jan 17 12:11:18.687558 containerd[1471]: time="2025-01-17T12:11:18.687487384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hmksg,Uid:7d963b6e-e967-461b-88c1-043d231c7107,Namespace:kube-system,Attempt:1,} returns sandbox id \"4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b\"" Jan 17 12:11:18.688689 kubelet[2607]: E0117 12:11:18.688336 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:18.691463 containerd[1471]: time="2025-01-17T12:11:18.691424987Z" level=info msg="CreateContainer within sandbox \"4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:11:18.695170 kubelet[2607]: I0117 12:11:18.694115 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d754b8cc8-79mq6" podStartSLOduration=70.17964701 podStartE2EDuration="1m15.694095689s" podCreationTimestamp="2025-01-17 12:10:03 +0000 UTC" firstStartedPulling="2025-01-17 12:11:12.713554895 +0000 UTC m=+90.501050650" lastFinishedPulling="2025-01-17 12:11:18.228003575 +0000 UTC m=+96.015499329" observedRunningTime="2025-01-17 12:11:18.663559975 +0000 UTC m=+96.451055739" watchObservedRunningTime="2025-01-17 12:11:18.694095689 +0000 UTC m=+96.481591443" Jan 17 12:11:18.720058 containerd[1471]: time="2025-01-17T12:11:18.719900791Z" level=info msg="CreateContainer within sandbox \"4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6cc8647b50ac9dc396bb7c6e060501996a5155c2b7be03f18996c7c5468d73d1\"" Jan 17 12:11:18.721333 containerd[1471]: time="2025-01-17T12:11:18.721190234Z" level=info msg="StartContainer for \"6cc8647b50ac9dc396bb7c6e060501996a5155c2b7be03f18996c7c5468d73d1\"" Jan 17 12:11:18.763786 systemd[1]: Started cri-containerd-6cc8647b50ac9dc396bb7c6e060501996a5155c2b7be03f18996c7c5468d73d1.scope - libcontainer container 6cc8647b50ac9dc396bb7c6e060501996a5155c2b7be03f18996c7c5468d73d1. 
Jan 17 12:11:18.802829 containerd[1471]: time="2025-01-17T12:11:18.802623336Z" level=info msg="StartContainer for \"6cc8647b50ac9dc396bb7c6e060501996a5155c2b7be03f18996c7c5468d73d1\" returns successfully" Jan 17 12:11:19.263639 systemd[1]: run-containerd-runc-k8s.io-8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142-runc.1U8wdo.mount: Deactivated successfully. Jan 17 12:11:19.618072 kubelet[2607]: E0117 12:11:19.617895 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:19.618869 systemd-networkd[1409]: caliaa81a242015: Gained IPv6LL Jan 17 12:11:19.694837 kubelet[2607]: I0117 12:11:19.694779 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c8975f968-4wqpn" podStartSLOduration=76.694758503 podStartE2EDuration="1m16.694758503s" podCreationTimestamp="2025-01-17 12:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:11:19.687838437 +0000 UTC m=+97.475334191" watchObservedRunningTime="2025-01-17 12:11:19.694758503 +0000 UTC m=+97.482254257" Jan 17 12:11:19.710460 kubelet[2607]: I0117 12:11:19.710344 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hmksg" podStartSLOduration=82.710317702 podStartE2EDuration="1m22.710317702s" podCreationTimestamp="2025-01-17 12:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:11:19.710100899 +0000 UTC m=+97.497596653" watchObservedRunningTime="2025-01-17 12:11:19.710317702 +0000 UTC m=+97.497813456" Jan 17 12:11:19.809754 systemd-networkd[1409]: calibd790bff320: Gained IPv6LL Jan 17 12:11:19.974475 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:52814.service - OpenSSH per-connection server daemon (10.0.0.1:52814). Jan 17 12:11:20.026462 sshd[6091]: Accepted publickey for core from 10.0.0.1 port 52814 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:20.028221 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:20.032515 systemd-logind[1458]: New session 23 of user core. Jan 17 12:11:20.040760 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:11:20.163354 sshd[6091]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:20.167738 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:52814.service: Deactivated successfully. Jan 17 12:11:20.169760 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:11:20.170367 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:11:20.171280 systemd-logind[1458]: Removed session 23. 
Jan 17 12:11:20.626783 kubelet[2607]: E0117 12:11:20.626751 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:21.628315 kubelet[2607]: E0117 12:11:21.628271 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:22.049770 systemd[1]: run-containerd-runc-k8s.io-5bc7b5f22090683c5b47fb024814bf655a5c9b4c69467fcd733fce33cef8ffc7-runc.zfCkLm.mount: Deactivated successfully. Jan 17 12:11:25.176409 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:52824.service - OpenSSH per-connection server daemon (10.0.0.1:52824). Jan 17 12:11:25.217346 sshd[6136]: Accepted publickey for core from 10.0.0.1 port 52824 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:25.219346 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:25.224122 systemd-logind[1458]: New session 24 of user core. Jan 17 12:11:25.231901 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:11:25.349790 sshd[6136]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:25.364660 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:52824.service: Deactivated successfully. Jan 17 12:11:25.366531 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:11:25.367991 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:11:25.382849 systemd[1]: Started sshd@24-10.0.0.49:22-10.0.0.1:52838.service - OpenSSH per-connection server daemon (10.0.0.1:52838). Jan 17 12:11:25.383781 systemd-logind[1458]: Removed session 24. Jan 17 12:11:25.419053 sshd[6150]: Accepted publickey for core from 10.0.0.1 port 52838 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:25.420744 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:25.425160 systemd-logind[1458]: New session 25 of user core. Jan 17 12:11:25.435733 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:11:26.062863 sshd[6150]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:26.072410 systemd[1]: sshd@24-10.0.0.49:22-10.0.0.1:52838.service: Deactivated successfully. Jan 17 12:11:26.074191 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:11:26.075964 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:11:26.084855 systemd[1]: Started sshd@25-10.0.0.49:22-10.0.0.1:52842.service - OpenSSH per-connection server daemon (10.0.0.1:52842). Jan 17 12:11:26.085938 systemd-logind[1458]: Removed session 25. Jan 17 12:11:26.121798 sshd[6163]: Accepted publickey for core from 10.0.0.1 port 52842 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:26.123473 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:26.128051 systemd-logind[1458]: New session 26 of user core. Jan 17 12:11:26.134811 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:11:27.957236 sshd[6163]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:27.968379 systemd[1]: sshd@25-10.0.0.49:22-10.0.0.1:52842.service: Deactivated successfully. Jan 17 12:11:27.971142 systemd[1]: session-26.scope: Deactivated successfully. 
Jan 17 12:11:27.972963 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:11:27.985120 systemd[1]: Started sshd@26-10.0.0.49:22-10.0.0.1:59290.service - OpenSSH per-connection server daemon (10.0.0.1:59290). Jan 17 12:11:27.986908 systemd-logind[1458]: Removed session 26. Jan 17 12:11:28.022468 sshd[6191]: Accepted publickey for core from 10.0.0.1 port 59290 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:28.024486 sshd[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:28.029442 systemd-logind[1458]: New session 27 of user core. Jan 17 12:11:28.042823 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 12:11:28.280142 sshd[6191]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:28.290290 systemd[1]: sshd@26-10.0.0.49:22-10.0.0.1:59290.service: Deactivated successfully. Jan 17 12:11:28.292355 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:11:28.294350 systemd-logind[1458]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:11:28.298953 systemd[1]: Started sshd@27-10.0.0.49:22-10.0.0.1:59306.service - OpenSSH per-connection server daemon (10.0.0.1:59306). Jan 17 12:11:28.300156 systemd-logind[1458]: Removed session 27. Jan 17 12:11:28.315849 kubelet[2607]: E0117 12:11:28.315801 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:28.335432 sshd[6204]: Accepted publickey for core from 10.0.0.1 port 59306 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:28.337104 sshd[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:28.341698 systemd-logind[1458]: New session 28 of user core. Jan 17 12:11:28.355734 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 12:11:28.470011 sshd[6204]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:28.473937 systemd[1]: sshd@27-10.0.0.49:22-10.0.0.1:59306.service: Deactivated successfully. Jan 17 12:11:28.476055 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 12:11:28.476766 systemd-logind[1458]: Session 28 logged out. Waiting for processes to exit. Jan 17 12:11:28.477631 systemd-logind[1458]: Removed session 28. Jan 17 12:11:33.481926 systemd[1]: Started sshd@28-10.0.0.49:22-10.0.0.1:59318.service - OpenSSH per-connection server daemon (10.0.0.1:59318). Jan 17 12:11:33.520061 sshd[6227]: Accepted publickey for core from 10.0.0.1 port 59318 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:33.521673 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:33.525945 systemd-logind[1458]: New session 29 of user core. Jan 17 12:11:33.532729 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 12:11:33.646927 sshd[6227]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:33.650924 systemd[1]: sshd@28-10.0.0.49:22-10.0.0.1:59318.service: Deactivated successfully. Jan 17 12:11:33.653829 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 12:11:33.654890 systemd-logind[1458]: Session 29 logged out. Waiting for processes to exit. Jan 17 12:11:33.655968 systemd-logind[1458]: Removed session 29. 
Jan 17 12:11:34.465067 kubelet[2607]: E0117 12:11:34.464583 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:38.658913 systemd[1]: Started sshd@29-10.0.0.49:22-10.0.0.1:33764.service - OpenSSH per-connection server daemon (10.0.0.1:33764). Jan 17 12:11:38.697952 sshd[6269]: Accepted publickey for core from 10.0.0.1 port 33764 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:38.699844 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:38.705104 systemd-logind[1458]: New session 30 of user core. Jan 17 12:11:38.712880 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 17 12:11:38.826385 sshd[6269]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:38.830449 systemd[1]: sshd@29-10.0.0.49:22-10.0.0.1:33764.service: Deactivated successfully. Jan 17 12:11:38.833246 systemd[1]: session-30.scope: Deactivated successfully. Jan 17 12:11:38.834045 systemd-logind[1458]: Session 30 logged out. Waiting for processes to exit. Jan 17 12:11:38.835387 systemd-logind[1458]: Removed session 30. Jan 17 12:11:42.309406 containerd[1471]: time="2025-01-17T12:11:42.309345496Z" level=info msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\"" Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.347 [WARNING][6299] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xfsj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83490bae-2f03-49cc-b16c-ff7f265ed80b", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd", Pod:"csi-node-driver-xfsj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd889c8f326", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.347 [INFO][6299] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.347 [INFO][6299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" iface="eth0" netns="" Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.347 [INFO][6299] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.347 [INFO][6299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.376 [INFO][6309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" HandleID="k8s-pod-network.da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.376 [INFO][6309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.376 [INFO][6309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.381 [WARNING][6309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" HandleID="k8s-pod-network.da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.381 [INFO][6309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" HandleID="k8s-pod-network.da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.383 [INFO][6309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:42.390140 containerd[1471]: 2025-01-17 12:11:42.387 [INFO][6299] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:42.390897 containerd[1471]: time="2025-01-17T12:11:42.390828679Z" level=info msg="TearDown network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\" successfully" Jan 17 12:11:42.390897 containerd[1471]: time="2025-01-17T12:11:42.390894193Z" level=info msg="StopPodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\" returns successfully" Jan 17 12:11:42.391894 containerd[1471]: time="2025-01-17T12:11:42.391790694Z" level=info msg="RemovePodSandbox for \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\"" Jan 17 12:11:42.394247 containerd[1471]: time="2025-01-17T12:11:42.394167218Z" level=info msg="Forcibly stopping sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\"" Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.433 [WARNING][6331] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xfsj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83490bae-2f03-49cc-b16c-ff7f265ed80b", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6aeca494c16323362a07c421c6c0b02bea5bf1feac88b95bde93153cbed354dd", Pod:"csi-node-driver-xfsj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd889c8f326", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.433 [INFO][6331] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.433 [INFO][6331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" iface="eth0" netns="" Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.433 [INFO][6331] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.434 [INFO][6331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.464 [INFO][6338] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" HandleID="k8s-pod-network.da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.464 [INFO][6338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.464 [INFO][6338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.473 [WARNING][6338] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" HandleID="k8s-pod-network.da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.473 [INFO][6338] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" HandleID="k8s-pod-network.da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Workload="localhost-k8s-csi--node--driver--xfsj8-eth0" Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.475 [INFO][6338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:42.480509 containerd[1471]: 2025-01-17 12:11:42.478 [INFO][6331] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce" Jan 17 12:11:42.480951 containerd[1471]: time="2025-01-17T12:11:42.480551845Z" level=info msg="TearDown network for sandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\" successfully" Jan 17 12:11:42.569331 containerd[1471]: time="2025-01-17T12:11:42.569145579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:11:42.569434 containerd[1471]: time="2025-01-17T12:11:42.569330170Z" level=info msg="RemovePodSandbox \"da929872b3d2a68fad8cce00b084109fe86652be25d846a02fbdfd75285a93ce\" returns successfully" Jan 17 12:11:42.570235 containerd[1471]: time="2025-01-17T12:11:42.570197052Z" level=info msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\"" Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.605 [WARNING][6360] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0", GenerateName:"calico-kube-controllers-6d754b8cc8-", Namespace:"calico-system", SelfLink:"", UID:"09d6eb3a-b020-453e-a3b2-1c2857fad614", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d754b8cc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5", Pod:"calico-kube-controllers-6d754b8cc8-79mq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaac9e41ea12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.605 [INFO][6360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.605 [INFO][6360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" iface="eth0" netns="" Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.605 [INFO][6360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.605 [INFO][6360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.626 [INFO][6368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" HandleID="k8s-pod-network.d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.626 [INFO][6368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.626 [INFO][6368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.631 [WARNING][6368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" HandleID="k8s-pod-network.d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.631 [INFO][6368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" HandleID="k8s-pod-network.d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.632 [INFO][6368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:42.637837 containerd[1471]: 2025-01-17 12:11:42.635 [INFO][6360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:42.638538 containerd[1471]: time="2025-01-17T12:11:42.637866349Z" level=info msg="TearDown network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" successfully" Jan 17 12:11:42.638538 containerd[1471]: time="2025-01-17T12:11:42.637897188Z" level=info msg="StopPodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" returns successfully" Jan 17 12:11:42.638538 containerd[1471]: time="2025-01-17T12:11:42.638463240Z" level=info msg="RemovePodSandbox for \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\"" Jan 17 12:11:42.638538 containerd[1471]: time="2025-01-17T12:11:42.638509137Z" level=info msg="Forcibly stopping sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\"" Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.682 [WARNING][6391] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0", GenerateName:"calico-kube-controllers-6d754b8cc8-", Namespace:"calico-system", SelfLink:"", UID:"09d6eb3a-b020-453e-a3b2-1c2857fad614", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d754b8cc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6930b3fc55f78d682f7af010bca6b55f7dfa9cb7c84927c86b2092d823902f5", Pod:"calico-kube-controllers-6d754b8cc8-79mq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaac9e41ea12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.683 [INFO][6391] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.683 [INFO][6391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" iface="eth0" netns="" Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.683 [INFO][6391] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.683 [INFO][6391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.705 [INFO][6399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" HandleID="k8s-pod-network.d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.706 [INFO][6399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.706 [INFO][6399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.710 [WARNING][6399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" HandleID="k8s-pod-network.d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.710 [INFO][6399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" HandleID="k8s-pod-network.d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Workload="localhost-k8s-calico--kube--controllers--6d754b8cc8--79mq6-eth0" Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.712 [INFO][6399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:42.716716 containerd[1471]: 2025-01-17 12:11:42.714 [INFO][6391] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965" Jan 17 12:11:42.717290 containerd[1471]: time="2025-01-17T12:11:42.716746368Z" level=info msg="TearDown network for sandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" successfully" Jan 17 12:11:42.728415 containerd[1471]: time="2025-01-17T12:11:42.728363812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:11:42.728495 containerd[1471]: time="2025-01-17T12:11:42.728428865Z" level=info msg="RemovePodSandbox \"d5864238f8f0283ac2fc90fddb74283b11572fb86422423fa5e74a83fe919965\" returns successfully" Jan 17 12:11:42.729006 containerd[1471]: time="2025-01-17T12:11:42.728982475Z" level=info msg="StopPodSandbox for \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\"" Jan 17 12:11:42.729093 containerd[1471]: time="2025-01-17T12:11:42.729073467Z" level=info msg="TearDown network for sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" successfully" Jan 17 12:11:42.729121 containerd[1471]: time="2025-01-17T12:11:42.729090890Z" level=info msg="StopPodSandbox for \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" returns successfully" Jan 17 12:11:42.729558 containerd[1471]: time="2025-01-17T12:11:42.729469818Z" level=info msg="RemovePodSandbox for \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\"" Jan 17 12:11:42.729558 containerd[1471]: time="2025-01-17T12:11:42.729505627Z" level=info msg="Forcibly stopping sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\"" Jan 17 12:11:42.729646 containerd[1471]: time="2025-01-17T12:11:42.729559188Z" level=info msg="TearDown network for sandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" successfully" Jan 17 12:11:42.733768 containerd[1471]: time="2025-01-17T12:11:42.733721889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:11:42.733768 containerd[1471]: time="2025-01-17T12:11:42.733762416Z" level=info msg="RemovePodSandbox \"7d9595934f8029d49a61e9bbe53d260babb05e813fac780246f3cad78cfc6a04\" returns successfully" Jan 17 12:11:42.734075 containerd[1471]: time="2025-01-17T12:11:42.734023912Z" level=info msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\"" Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.766 [WARNING][6421] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf1e6aaa-53c2-4de6-a445-b92ba78d0548", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90", Pod:"coredns-7db6d8ff4d-4grjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90580dcda42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.766 [INFO][6421] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.766 [INFO][6421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" iface="eth0" netns="" Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.766 [INFO][6421] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.766 [INFO][6421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.787 [INFO][6428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" HandleID="k8s-pod-network.fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.787 [INFO][6428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.787 [INFO][6428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.792 [WARNING][6428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" HandleID="k8s-pod-network.fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.792 [INFO][6428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" HandleID="k8s-pod-network.fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.793 [INFO][6428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:42.797785 containerd[1471]: 2025-01-17 12:11:42.795 [INFO][6421] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:42.798241 containerd[1471]: time="2025-01-17T12:11:42.797814405Z" level=info msg="TearDown network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" successfully" Jan 17 12:11:42.798241 containerd[1471]: time="2025-01-17T12:11:42.797847598Z" level=info msg="StopPodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" returns successfully" Jan 17 12:11:42.798338 containerd[1471]: time="2025-01-17T12:11:42.798302350Z" level=info msg="RemovePodSandbox for \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\"" Jan 17 12:11:42.798338 containerd[1471]: time="2025-01-17T12:11:42.798336504Z" level=info msg="Forcibly stopping sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\"" Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.833 [WARNING][6450] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf1e6aaa-53c2-4de6-a445-b92ba78d0548", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15f37b6b81f37f9b18c2286f0cda00dc2ca2624f70db44dd1afaa253fb75ce90", Pod:"coredns-7db6d8ff4d-4grjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90580dcda42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.833 [INFO][6450] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.833 [INFO][6450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" iface="eth0" netns="" Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.833 [INFO][6450] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.833 [INFO][6450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.854 [INFO][6457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" HandleID="k8s-pod-network.fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.854 [INFO][6457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.854 [INFO][6457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.859 [WARNING][6457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" HandleID="k8s-pod-network.fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.859 [INFO][6457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" HandleID="k8s-pod-network.fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Workload="localhost-k8s-coredns--7db6d8ff4d--4grjz-eth0" Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.860 [INFO][6457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:42.864928 containerd[1471]: 2025-01-17 12:11:42.862 [INFO][6450] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e" Jan 17 12:11:42.864928 containerd[1471]: time="2025-01-17T12:11:42.864888933Z" level=info msg="TearDown network for sandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" successfully" Jan 17 12:11:42.869930 containerd[1471]: time="2025-01-17T12:11:42.869840150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:11:42.869930 containerd[1471]: time="2025-01-17T12:11:42.869921103Z" level=info msg="RemovePodSandbox \"fcc7b47a09a4c0c7abe0693b3c4006325a41bb73a651e05da026b949a2eeda3e\" returns successfully" Jan 17 12:11:42.870528 containerd[1471]: time="2025-01-17T12:11:42.870491184Z" level=info msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\"" Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.909 [WARNING][6480] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0", GenerateName:"calico-apiserver-c8975f968-", Namespace:"calico-apiserver", SelfLink:"", UID:"50e92775-825e-4d1d-9a42-956f2281a0b9", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8975f968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142", Pod:"calico-apiserver-c8975f968-4wqpn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa81a242015", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.910 [INFO][6480] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.910 [INFO][6480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" iface="eth0" netns="" Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.910 [INFO][6480] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.910 [INFO][6480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.932 [INFO][6487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" HandleID="k8s-pod-network.6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.932 [INFO][6487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.933 [INFO][6487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.937 [WARNING][6487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" HandleID="k8s-pod-network.6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.937 [INFO][6487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" HandleID="k8s-pod-network.6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.938 [INFO][6487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:42.942962 containerd[1471]: 2025-01-17 12:11:42.940 [INFO][6480] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:42.943476 containerd[1471]: time="2025-01-17T12:11:42.943005244Z" level=info msg="TearDown network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" successfully" Jan 17 12:11:42.943476 containerd[1471]: time="2025-01-17T12:11:42.943036123Z" level=info msg="StopPodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" returns successfully" Jan 17 12:11:42.943571 containerd[1471]: time="2025-01-17T12:11:42.943544807Z" level=info msg="RemovePodSandbox for \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\"" Jan 17 12:11:42.943632 containerd[1471]: time="2025-01-17T12:11:42.943577109Z" level=info msg="Forcibly stopping sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\"" Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:42.976 [WARNING][6509] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0", GenerateName:"calico-apiserver-c8975f968-", Namespace:"calico-apiserver", SelfLink:"", UID:"50e92775-825e-4d1d-9a42-956f2281a0b9", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8975f968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e751af9269cd7391d8f7a6d4e0b112864f3011d9e7dd402ac1611676dc81142", Pod:"calico-apiserver-c8975f968-4wqpn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa81a242015", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:42.976 [INFO][6509] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:42.976 [INFO][6509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" iface="eth0" netns="" Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:42.976 [INFO][6509] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:42.976 [INFO][6509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:42.996 [INFO][6517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" HandleID="k8s-pod-network.6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:42.996 [INFO][6517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:42.996 [INFO][6517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:43.001 [WARNING][6517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" HandleID="k8s-pod-network.6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:43.001 [INFO][6517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" HandleID="k8s-pod-network.6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Workload="localhost-k8s-calico--apiserver--c8975f968--4wqpn-eth0" Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:43.002 [INFO][6517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:43.006521 containerd[1471]: 2025-01-17 12:11:43.004 [INFO][6509] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce" Jan 17 12:11:43.006959 containerd[1471]: time="2025-01-17T12:11:43.006578044Z" level=info msg="TearDown network for sandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" successfully" Jan 17 12:11:43.011057 containerd[1471]: time="2025-01-17T12:11:43.011010877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:11:43.011105 containerd[1471]: time="2025-01-17T12:11:43.011068846Z" level=info msg="RemovePodSandbox \"6313efe415de64df572118aece4b6b6d6857826c96efb69cdaacff9f1e2715ce\" returns successfully" Jan 17 12:11:43.011689 containerd[1471]: time="2025-01-17T12:11:43.011655008Z" level=info msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\"" Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.044 [WARNING][6539] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7d963b6e-e967-461b-88c1-043d231c7107", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b", Pod:"coredns-7db6d8ff4d-hmksg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd790bff320", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.044 [INFO][6539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.044 [INFO][6539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" iface="eth0" netns="" Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.044 [INFO][6539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.044 [INFO][6539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.066 [INFO][6546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" HandleID="k8s-pod-network.5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.066 [INFO][6546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.066 [INFO][6546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.071 [WARNING][6546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" HandleID="k8s-pod-network.5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.071 [INFO][6546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" HandleID="k8s-pod-network.5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.072 [INFO][6546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:43.077562 containerd[1471]: 2025-01-17 12:11:43.074 [INFO][6539] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:43.078041 containerd[1471]: time="2025-01-17T12:11:43.077637583Z" level=info msg="TearDown network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" successfully" Jan 17 12:11:43.078041 containerd[1471]: time="2025-01-17T12:11:43.077670305Z" level=info msg="StopPodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" returns successfully" Jan 17 12:11:43.078300 containerd[1471]: time="2025-01-17T12:11:43.078277286Z" level=info msg="RemovePodSandbox for \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\"" Jan 17 12:11:43.078357 containerd[1471]: time="2025-01-17T12:11:43.078312673Z" level=info msg="Forcibly stopping sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\"" Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.111 [WARNING][6568] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7d963b6e-e967-461b-88c1-043d231c7107", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d8b5de7211200aa76c3b47087b901e49436765e5524ffa07df7a5742ddb539b", Pod:"coredns-7db6d8ff4d-hmksg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd790bff320", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.111 [INFO][6568] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.111 [INFO][6568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" iface="eth0" netns="" Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.111 [INFO][6568] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.111 [INFO][6568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.137 [INFO][6575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" HandleID="k8s-pod-network.5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.137 [INFO][6575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.137 [INFO][6575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.141 [WARNING][6575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" HandleID="k8s-pod-network.5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.141 [INFO][6575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" HandleID="k8s-pod-network.5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Workload="localhost-k8s-coredns--7db6d8ff4d--hmksg-eth0" Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.142 [INFO][6575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:43.147979 containerd[1471]: 2025-01-17 12:11:43.144 [INFO][6568] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708" Jan 17 12:11:43.147979 containerd[1471]: time="2025-01-17T12:11:43.147944105Z" level=info msg="TearDown network for sandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" successfully" Jan 17 12:11:43.152566 containerd[1471]: time="2025-01-17T12:11:43.152532433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:11:43.152651 containerd[1471]: time="2025-01-17T12:11:43.152601975Z" level=info msg="RemovePodSandbox \"5b00bd0f21476614cadc9119c23166b6b426f37ed0691a0548cdbc4ecac79708\" returns successfully" Jan 17 12:11:43.153150 containerd[1471]: time="2025-01-17T12:11:43.153121610Z" level=info msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\"" Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.187 [WARNING][6599] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0", GenerateName:"calico-apiserver-c8975f968-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6c7969a-d094-4962-9f3d-83a3ce21e375", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8975f968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34", Pod:"calico-apiserver-c8975f968-wpfqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali022a8eb8e5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.187 [INFO][6599] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.187 [INFO][6599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" iface="eth0" netns="" Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.187 [INFO][6599] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.187 [INFO][6599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.206 [INFO][6606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" HandleID="k8s-pod-network.469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.206 [INFO][6606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.206 [INFO][6606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.211 [WARNING][6606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" HandleID="k8s-pod-network.469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.211 [INFO][6606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" HandleID="k8s-pod-network.469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.212 [INFO][6606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:43.216930 containerd[1471]: 2025-01-17 12:11:43.214 [INFO][6599] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:43.217552 containerd[1471]: time="2025-01-17T12:11:43.217493142Z" level=info msg="TearDown network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" successfully" Jan 17 12:11:43.217552 containerd[1471]: time="2025-01-17T12:11:43.217531565Z" level=info msg="StopPodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" returns successfully" Jan 17 12:11:43.218191 containerd[1471]: time="2025-01-17T12:11:43.218161448Z" level=info msg="RemovePodSandbox for \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\"" Jan 17 12:11:43.218238 containerd[1471]: time="2025-01-17T12:11:43.218196164Z" level=info msg="Forcibly stopping sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\"" Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.253 [WARNING][6629] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0", GenerateName:"calico-apiserver-c8975f968-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6c7969a-d094-4962-9f3d-83a3ce21e375", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8975f968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1588cf2bb179ded0c795647a1952dccf8b0cd1b97250bee1a55071b920474f34", Pod:"calico-apiserver-c8975f968-wpfqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali022a8eb8e5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.253 [INFO][6629] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.253 [INFO][6629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" iface="eth0" netns="" Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.253 [INFO][6629] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.253 [INFO][6629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.274 [INFO][6637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" HandleID="k8s-pod-network.469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.274 [INFO][6637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.274 [INFO][6637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.280 [WARNING][6637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" HandleID="k8s-pod-network.469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.280 [INFO][6637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" HandleID="k8s-pod-network.469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Workload="localhost-k8s-calico--apiserver--c8975f968--wpfqg-eth0" Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.281 [INFO][6637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:11:43.285918 containerd[1471]: 2025-01-17 12:11:43.283 [INFO][6629] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd" Jan 17 12:11:43.286337 containerd[1471]: time="2025-01-17T12:11:43.285964716Z" level=info msg="TearDown network for sandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" successfully" Jan 17 12:11:43.290300 containerd[1471]: time="2025-01-17T12:11:43.290267993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:11:43.290344 containerd[1471]: time="2025-01-17T12:11:43.290316104Z" level=info msg="RemovePodSandbox \"469ce088eca40d8c6d8b6881ec1d683d2cc7ab8c07f877505d639019caac0bcd\" returns successfully" Jan 17 12:11:43.315424 kubelet[2607]: E0117 12:11:43.315385 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:43.847147 systemd[1]: Started sshd@30-10.0.0.49:22-10.0.0.1:33770.service - OpenSSH per-connection server daemon (10.0.0.1:33770). Jan 17 12:11:43.887583 sshd[6645]: Accepted publickey for core from 10.0.0.1 port 33770 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:43.889343 sshd[6645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:43.893692 systemd-logind[1458]: New session 31 of user core. Jan 17 12:11:43.907822 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 17 12:11:44.064119 sshd[6645]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:44.067773 systemd[1]: sshd@30-10.0.0.49:22-10.0.0.1:33770.service: Deactivated successfully. Jan 17 12:11:44.070952 systemd[1]: session-31.scope: Deactivated successfully. Jan 17 12:11:44.072633 systemd-logind[1458]: Session 31 logged out. Waiting for processes to exit. Jan 17 12:11:44.073899 systemd-logind[1458]: Removed session 31. Jan 17 12:11:49.074956 systemd[1]: Started sshd@31-10.0.0.49:22-10.0.0.1:58976.service - OpenSSH per-connection server daemon (10.0.0.1:58976). Jan 17 12:11:49.112185 sshd[6659]: Accepted publickey for core from 10.0.0.1 port 58976 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:49.113732 sshd[6659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:49.118696 systemd-logind[1458]: New session 32 of user core. Jan 17 12:11:49.128814 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 17 12:11:49.238053 sshd[6659]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:49.241764 systemd[1]: sshd@31-10.0.0.49:22-10.0.0.1:58976.service: Deactivated successfully. Jan 17 12:11:49.243644 systemd[1]: session-32.scope: Deactivated successfully. Jan 17 12:11:49.244294 systemd-logind[1458]: Session 32 logged out. Waiting for processes to exit. Jan 17 12:11:49.245227 systemd-logind[1458]: Removed session 32.
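The kubelet error logged shortly before these SSH entries ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") reflects the glibc resolv.conf limit of three nameserver entries: when the host lists more resolvers, only the first three are applied to pod DNS and the rest are reported as omitted. A small Go sketch of that truncation; the fourth resolver below is an assumed example, not taken from this host:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers is the glibc resolv.conf limit the kubelet warning refers to:
// only the first three "nameserver" lines are honoured.
const maxNameservers = 3

// applyNameserverLimit keeps the first maxNameservers entries, which is the
// effect described by "some nameservers have been omitted".
func applyNameserverLimit(servers []string) []string {
	if len(servers) <= maxNameservers {
		return servers
	}
	return servers[:maxNameservers]
}

func main() {
	// Hypothetical host resolver list; only the first three survive.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
	applied := applyNameserverLimit(host)
	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
}
```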