Jan 13 21:29:08.866833 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 21:29:08.866860 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:29:08.866875 kernel: BIOS-provided physical RAM map: Jan 13 21:29:08.866884 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 21:29:08.866892 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 21:29:08.866900 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 21:29:08.866910 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 13 21:29:08.866919 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 13 21:29:08.866927 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 13 21:29:08.866939 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 13 21:29:08.866947 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 21:29:08.866956 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 21:29:08.866964 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 21:29:08.866973 kernel: NX (Execute Disable) protection: active Jan 13 21:29:08.866983 kernel: APIC: Static calls initialized Jan 13 21:29:08.867010 kernel: SMBIOS 2.8 present. 
Jan 13 21:29:08.867020 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 13 21:29:08.867029 kernel: Hypervisor detected: KVM Jan 13 21:29:08.867039 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 21:29:08.867048 kernel: kvm-clock: using sched offset of 2122417850 cycles Jan 13 21:29:08.867067 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 21:29:08.867077 kernel: tsc: Detected 2794.748 MHz processor Jan 13 21:29:08.867087 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 21:29:08.867096 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 21:29:08.867106 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 13 21:29:08.867120 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 21:29:08.867130 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 21:29:08.867139 kernel: Using GB pages for direct mapping Jan 13 21:29:08.867149 kernel: ACPI: Early table checksum verification disabled Jan 13 21:29:08.867158 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 13 21:29:08.867168 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:29:08.867178 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:29:08.867187 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:29:08.867200 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 13 21:29:08.867210 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:29:08.867219 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:29:08.867229 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:29:08.867238 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:29:08.867248 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 13 21:29:08.867257 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 13 21:29:08.867272 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 13 21:29:08.867284 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 13 21:29:08.867294 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 13 21:29:08.867304 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 13 21:29:08.867314 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 13 21:29:08.867324 kernel: No NUMA configuration found Jan 13 21:29:08.867333 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 13 21:29:08.867343 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 13 21:29:08.867357 kernel: Zone ranges: Jan 13 21:29:08.867366 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:29:08.867376 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 13 21:29:08.867386 kernel: Normal empty Jan 13 21:29:08.867396 kernel: Movable zone start for each node Jan 13 21:29:08.867406 kernel: Early memory node ranges Jan 13 21:29:08.867415 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 21:29:08.867425 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 13 21:29:08.867435 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 13 21:29:08.867448 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:29:08.867458 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 21:29:08.867468 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 13 21:29:08.867478 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 21:29:08.867488 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 21:29:08.867498 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 21:29:08.867507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 21:29:08.867518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 21:29:08.867527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:29:08.867541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 21:29:08.867551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 21:29:08.867560 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:29:08.867570 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 21:29:08.867580 kernel: TSC deadline timer available Jan 13 21:29:08.867590 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 13 21:29:08.867600 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 21:29:08.867610 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 13 21:29:08.867619 kernel: kvm-guest: setup PV sched yield Jan 13 21:29:08.867629 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 13 21:29:08.867642 kernel: Booting paravirtualized kernel on KVM Jan 13 21:29:08.867653 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:29:08.867663 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 13 21:29:08.867673 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 13 21:29:08.867685 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 13 21:29:08.867696 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 13 21:29:08.867707 kernel: kvm-guest: PV spinlocks enabled Jan 13 21:29:08.867718 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 21:29:08.867729 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:29:08.867743 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:29:08.867753 kernel: random: crng init done Jan 13 21:29:08.867763 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:29:08.867773 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:29:08.867783 kernel: Fallback order for Node 0: 0 Jan 13 21:29:08.867793 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 13 21:29:08.867803 kernel: Policy zone: DMA32 Jan 13 21:29:08.867812 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:29:08.867826 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136904K reserved, 0K cma-reserved) Jan 13 21:29:08.867836 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 21:29:08.867846 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:29:08.867856 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:29:08.867866 kernel: Dynamic Preempt: voluntary Jan 13 21:29:08.867876 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:29:08.867887 kernel: rcu: RCU event tracing is enabled. Jan 13 21:29:08.867897 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 21:29:08.867907 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:29:08.867920 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:29:08.867930 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:29:08.867941 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:29:08.867950 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 21:29:08.867960 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 13 21:29:08.867971 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:29:08.867980 kernel: Console: colour VGA+ 80x25 Jan 13 21:29:08.867990 kernel: printk: console [ttyS0] enabled Jan 13 21:29:08.868041 kernel: ACPI: Core revision 20230628 Jan 13 21:29:08.868063 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 13 21:29:08.868073 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:29:08.868083 kernel: x2apic enabled Jan 13 21:29:08.868093 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:29:08.868103 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 13 21:29:08.868113 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 13 21:29:08.868123 kernel: kvm-guest: setup PV IPIs Jan 13 21:29:08.868146 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 21:29:08.868156 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 21:29:08.868167 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 13 21:29:08.868178 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 13 21:29:08.868188 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 13 21:29:08.868202 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 13 21:29:08.868212 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:29:08.868222 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 21:29:08.868233 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:29:08.868247 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 21:29:08.868257 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 13 21:29:08.868268 kernel: RETBleed: Mitigation: untrained return thunk Jan 13 21:29:08.868278 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 21:29:08.868289 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 21:29:08.868299 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 13 21:29:08.868311 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 13 21:29:08.868321 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 13 21:29:08.868331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 21:29:08.868345 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 21:29:08.868355 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 21:29:08.868366 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 21:29:08.868376 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 13 21:29:08.868387 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:29:08.868397 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:29:08.868408 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:29:08.868418 kernel: landlock: Up and running. Jan 13 21:29:08.868428 kernel: SELinux: Initializing. Jan 13 21:29:08.868441 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:29:08.868452 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:29:08.868462 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 13 21:29:08.868473 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:29:08.868483 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:29:08.868494 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:29:08.868504 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 13 21:29:08.868515 kernel: ... version: 0 Jan 13 21:29:08.868528 kernel: ... bit width: 48 Jan 13 21:29:08.868538 kernel: ... generic registers: 6 Jan 13 21:29:08.868549 kernel: ... value mask: 0000ffffffffffff Jan 13 21:29:08.868559 kernel: ... max period: 00007fffffffffff Jan 13 21:29:08.868569 kernel: ... fixed-purpose events: 0 Jan 13 21:29:08.868580 kernel: ... 
event mask: 000000000000003f Jan 13 21:29:08.868590 kernel: signal: max sigframe size: 1776 Jan 13 21:29:08.868601 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:29:08.868611 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:29:08.868622 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:29:08.868636 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:29:08.868646 kernel: .... node #0, CPUs: #1 #2 #3 Jan 13 21:29:08.868657 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 21:29:08.868667 kernel: smpboot: Max logical packages: 1 Jan 13 21:29:08.868678 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 13 21:29:08.868689 kernel: devtmpfs: initialized Jan 13 21:29:08.868699 kernel: x86/mm: Memory block size: 128MB Jan 13 21:29:08.868710 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:29:08.868721 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 21:29:08.868735 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:29:08.868745 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:29:08.868756 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:29:08.868766 kernel: audit: type=2000 audit(1736803748.762:1): state=initialized audit_enabled=0 res=1 Jan 13 21:29:08.868776 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:29:08.868787 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:29:08.868797 kernel: cpuidle: using governor menu Jan 13 21:29:08.868807 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:29:08.868818 kernel: dca service started, version 1.12.1 Jan 13 21:29:08.868831 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 13 21:29:08.868842 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 13 21:29:08.868852 kernel: PCI: Using configuration type 1 for base access Jan 13 21:29:08.868863 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 21:29:08.868874 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:29:08.868884 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:29:08.868895 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:29:08.868906 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:29:08.868916 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:29:08.868930 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:29:08.868941 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:29:08.868952 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:29:08.868962 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:29:08.868973 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:29:08.868984 kernel: ACPI: Interpreter enabled Jan 13 21:29:08.869008 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 21:29:08.869018 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:29:08.869029 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:29:08.869044 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 21:29:08.869062 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 13 21:29:08.869073 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:29:08.869288 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:29:08.869448 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 13 21:29:08.869601 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 13 21:29:08.869617 kernel: PCI host bridge to bus 0000:00 Jan 13 21:29:08.869785 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:29:08.869941 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:29:08.870205 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:29:08.870352 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 13 21:29:08.870495 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 21:29:08.870636 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 13 21:29:08.870778 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:29:08.870959 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 13 21:29:08.871151 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 13 21:29:08.871302 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 13 21:29:08.871445 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 13 21:29:08.871618 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 13 21:29:08.871838 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 21:29:08.872078 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:29:08.872258 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 13 21:29:08.872396 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 13 21:29:08.872517 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 13 21:29:08.872646 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 13 21:29:08.872767 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 21:29:08.872887 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 13 
21:29:08.873036 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 13 21:29:08.873191 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 21:29:08.873313 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 13 21:29:08.873433 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 13 21:29:08.873552 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 13 21:29:08.873683 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 13 21:29:08.873898 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 13 21:29:08.874070 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 13 21:29:08.874211 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 13 21:29:08.874331 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 13 21:29:08.874450 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 13 21:29:08.874576 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 13 21:29:08.874697 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 13 21:29:08.874707 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 21:29:08.874720 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 21:29:08.874728 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 21:29:08.874736 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 21:29:08.874743 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 13 21:29:08.874751 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 13 21:29:08.874759 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 13 21:29:08.874767 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 13 21:29:08.874774 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 13 21:29:08.874782 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 13 21:29:08.874792 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 13 21:29:08.874799 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 13 21:29:08.874807 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 13 21:29:08.874814 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 13 21:29:08.874822 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 13 21:29:08.874829 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 13 21:29:08.874837 kernel: iommu: Default domain type: Translated Jan 13 21:29:08.874845 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:29:08.874853 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:29:08.874862 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:29:08.874870 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 21:29:08.874877 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 13 21:29:08.875103 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 13 21:29:08.875227 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 13 21:29:08.875344 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 21:29:08.875355 kernel: vgaarb: loaded Jan 13 21:29:08.875362 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 13 21:29:08.875374 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 13 21:29:08.875382 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 21:29:08.875390 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 
21:29:08.875397 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:29:08.875405 kernel: pnp: PnP ACPI init Jan 13 21:29:08.875531 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 13 21:29:08.875542 kernel: pnp: PnP ACPI: found 6 devices Jan 13 21:29:08.875550 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:29:08.875560 kernel: NET: Registered PF_INET protocol family Jan 13 21:29:08.875568 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:29:08.875575 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 21:29:08.875583 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:29:08.875591 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:29:08.875598 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 21:29:08.875606 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 21:29:08.875613 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:29:08.875621 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:29:08.875631 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:29:08.875638 kernel: NET: Registered PF_XDP protocol family Jan 13 21:29:08.875754 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:29:08.875863 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:29:08.875971 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:29:08.876120 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 13 21:29:08.876232 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 13 21:29:08.876340 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 13 21:29:08.876354 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:29:08.876361 kernel: Initialise system trusted keyrings Jan 13 21:29:08.876369 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 21:29:08.876377 kernel: Key type asymmetric registered Jan 13 21:29:08.876385 kernel: Asymmetric key parser 'x509' registered Jan 13 21:29:08.876392 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:29:08.876400 kernel: io scheduler mq-deadline registered Jan 13 21:29:08.876407 kernel: io scheduler kyber registered Jan 13 21:29:08.876415 kernel: io scheduler bfq registered Jan 13 21:29:08.876425 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:29:08.876433 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 13 21:29:08.876440 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 21:29:08.876448 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 21:29:08.876456 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:29:08.876463 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:29:08.876471 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:29:08.876479 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:29:08.876486 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:29:08.876609 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 21:29:08.876621 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:29:08.876731 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 13 21:29:08.876842 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:29:08 UTC (1736803748) Jan 13 21:29:08.876954 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 13 21:29:08.876964 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 21:29:08.876972 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:29:08.876979 kernel: Segment Routing with IPv6 Jan 13 21:29:08.876990 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:29:08.877091 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:29:08.877099 kernel: Key type dns_resolver registered Jan 13 21:29:08.877107 kernel: IPI shorthand broadcast: enabled Jan 13 21:29:08.877114 kernel: sched_clock: Marking stable (543002264, 102790511)->(688379738, -42586963) Jan 13 21:29:08.877122 kernel: registered taskstats version 1 Jan 13 21:29:08.877130 kernel: Loading compiled-in X.509 certificates Jan 13 21:29:08.877137 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:29:08.877145 kernel: Key type .fscrypt registered Jan 13 21:29:08.877156 kernel: Key type fscrypt-provisioning registered Jan 13 21:29:08.877164 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 21:29:08.877171 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:29:08.877179 kernel: ima: No architecture policies found Jan 13 21:29:08.877186 kernel: clk: Disabling unused clocks Jan 13 21:29:08.877194 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:29:08.877201 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:29:08.877209 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:29:08.877216 kernel: Run /init as init process Jan 13 21:29:08.877226 kernel: with arguments: Jan 13 21:29:08.877234 kernel: /init Jan 13 21:29:08.877241 kernel: with environment: Jan 13 21:29:08.877248 kernel: HOME=/ Jan 13 21:29:08.877256 kernel: TERM=linux Jan 13 21:29:08.877263 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:29:08.877273 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:29:08.877282 systemd[1]: Detected virtualization kvm. Jan 13 21:29:08.877293 systemd[1]: Detected architecture x86-64. Jan 13 21:29:08.877301 systemd[1]: Running in initrd. Jan 13 21:29:08.877309 systemd[1]: No hostname configured, using default hostname. Jan 13 21:29:08.877317 systemd[1]: Hostname set to . Jan 13 21:29:08.877325 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:29:08.877333 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:29:08.877341 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:29:08.877349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:29:08.877361 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:29:08.877380 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 13 21:29:08.877391 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:29:08.877400 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:29:08.877410 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:29:08.877421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:29:08.877429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:29:08.877437 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:29:08.877446 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:29:08.877454 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:29:08.877462 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:29:08.877470 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:29:08.877479 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:29:08.877489 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:29:08.877498 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:29:08.877506 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:29:08.877514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:29:08.877522 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:29:08.877533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:29:08.877541 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:29:08.877550 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:29:08.877560 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:29:08.877568 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:29:08.877577 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:29:08.877585 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:29:08.877593 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:29:08.877602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:29:08.877610 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:29:08.877619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:29:08.877627 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:29:08.877638 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:29:08.877666 systemd-journald[192]: Collecting audit messages is disabled. Jan 13 21:29:08.877687 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:29:08.877696 systemd-journald[192]: Journal started Jan 13 21:29:08.877716 systemd-journald[192]: Runtime Journal (/run/log/journal/fdde899c0540432cba4c0f47c9c7e76d) is 6.0M, max 48.4M, 42.3M free. Jan 13 21:29:08.872192 systemd-modules-load[193]: Inserted module 'overlay' Jan 13 21:29:08.904019 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 13 21:29:08.910009 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:29:08.912460 systemd-modules-load[193]: Inserted module 'br_netfilter' Jan 13 21:29:08.913409 kernel: Bridge firewalling registered Jan 13 21:29:08.914159 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:29:08.915948 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:29:08.918174 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:29:08.919806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:08.923150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:29:08.927096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:29:08.929426 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:29:08.932302 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:29:08.942425 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:29:08.944601 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:29:08.949953 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:29:08.961770 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:29:08.980256 dracut-cmdline[228]: dracut-dracut-053 Jan 13 21:29:08.983306 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:29:08.994147 systemd-resolved[226]: Positive Trust Anchors: Jan 13 21:29:08.994165 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:29:08.994199 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:29:08.997024 systemd-resolved[226]: Defaulting to hostname 'linux'. Jan 13 21:29:08.998213 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:29:09.004391 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:29:09.072016 kernel: SCSI subsystem initialized Jan 13 21:29:09.081016 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:29:09.092024 kernel: iscsi: registered transport (tcp) Jan 13 21:29:09.113019 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:29:09.113037 kernel: QLogic iSCSI HBA Driver Jan 13 21:29:09.160947 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 13 21:29:09.174191 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:29:09.199039 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:29:09.199141 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:29:09.199157 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:29:09.244019 kernel: raid6: avx2x4 gen() 20780 MB/s Jan 13 21:29:09.261009 kernel: raid6: avx2x2 gen() 20652 MB/s Jan 13 21:29:09.278296 kernel: raid6: avx2x1 gen() 17794 MB/s Jan 13 21:29:09.278313 kernel: raid6: using algorithm avx2x4 gen() 20780 MB/s Jan 13 21:29:09.296308 kernel: raid6: .... xor() 5508 MB/s, rmw enabled Jan 13 21:29:09.296327 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:29:09.317017 kernel: xor: automatically using best checksumming function avx Jan 13 21:29:09.470019 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:29:09.482901 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:29:09.496248 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:29:09.508166 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jan 13 21:29:09.512679 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:29:09.524226 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:29:09.540042 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Jan 13 21:29:09.573347 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:29:09.589268 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:29:09.650027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:29:09.662117 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:29:09.676051 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 21:29:09.707401 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:29:09.707706 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:29:09.707730 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:29:09.707750 kernel: GPT:9289727 != 19775487 Jan 13 21:29:09.707773 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:29:09.707795 kernel: GPT:9289727 != 19775487 Jan 13 21:29:09.707815 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:29:09.707837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:29:09.707854 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:29:09.707877 kernel: AES CTR mode by8 optimization enabled Jan 13 21:29:09.683482 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:29:09.713242 kernel: libata version 3.00 loaded. Jan 13 21:29:09.692224 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:29:09.693606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 13 21:29:09.720362 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 21:29:09.755287 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 21:29:09.755308 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 21:29:09.755463 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 21:29:09.755772 kernel: scsi host0: ahci Jan 13 21:29:09.756254 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (459) Jan 13 21:29:09.756268 kernel: scsi host1: ahci Jan 13 21:29:09.756413 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (467) Jan 13 21:29:09.756424 kernel: scsi host2: ahci Jan 13 21:29:09.756602 kernel: scsi host3: ahci Jan 13 21:29:09.756813 kernel: scsi host4: ahci Jan 13 21:29:09.756973 kernel: scsi host5: ahci Jan 13 21:29:09.757173 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 13 21:29:09.757186 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 13 21:29:09.757197 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 13 21:29:09.757235 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 13 21:29:09.757246 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 13 21:29:09.757256 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 13 21:29:09.695094 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:29:09.707226 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:29:09.718340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:29:09.718454 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:29:09.722631 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:29:09.724716 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:29:09.724844 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:09.727312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:29:09.733289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:29:09.736247 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:29:09.763554 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:29:09.790362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:09.797117 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:29:09.808796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:29:09.813712 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:29:09.814961 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:29:09.830184 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:29:09.833222 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:29:09.840138 disk-uuid[565]: Primary Header is updated. 
Jan 13 21:29:09.840138 disk-uuid[565]: Secondary Entries is updated. Jan 13 21:29:09.840138 disk-uuid[565]: Secondary Header is updated. Jan 13 21:29:09.847011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:29:09.849076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:29:09.855377 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:29:10.064165 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:10.064252 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:10.064268 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:10.065036 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:10.066036 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:10.067035 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 21:29:10.068060 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 21:29:10.068081 kernel: ata3.00: applying bridge limits Jan 13 21:29:10.069078 kernel: ata3.00: configured for UDMA/100 Jan 13 21:29:10.070031 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 21:29:10.111520 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 21:29:10.123557 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 21:29:10.123574 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 21:29:10.851821 disk-uuid[567]: The operation has completed successfully. Jan 13 21:29:10.853498 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:29:10.875457 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:29:10.875579 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:29:10.908152 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:29:10.911449 sh[592]: Success Jan 13 21:29:10.923027 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 21:29:10.955196 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:29:10.975422 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:29:10.978618 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:29:10.990606 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:29:10.990686 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:29:10.990698 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:29:10.991602 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:29:10.992337 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:29:10.996942 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:29:10.998068 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:29:11.005303 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:29:11.008042 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 13 21:29:11.015568 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:11.015598 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:29:11.015609 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:29:11.019145 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:29:11.027732 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:29:11.029739 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:11.039926 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:29:11.049144 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:29:11.104013 ignition[680]: Ignition 2.19.0 Jan 13 21:29:11.104589 ignition[680]: Stage: fetch-offline Jan 13 21:29:11.104640 ignition[680]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:11.104651 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:11.104750 ignition[680]: parsed url from cmdline: "" Jan 13 21:29:11.104753 ignition[680]: no config URL provided Jan 13 21:29:11.104759 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:29:11.104767 ignition[680]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:29:11.104798 ignition[680]: op(1): [started] loading QEMU firmware config module Jan 13 21:29:11.104804 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:29:11.113739 ignition[680]: op(1): [finished] loading QEMU firmware config module Jan 13 21:29:11.113761 ignition[680]: QEMU firmware config was not found. Ignoring... Jan 13 21:29:11.130395 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:29:11.151324 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:29:11.157156 ignition[680]: parsing config with SHA512: 3f123db9bd2934c784350ba5d0f7a42ea83616dd2013b07fc231b17b1e66b9bf711da2b93d6b8da3fde6c9611866faf8e9dc2bd833f87b0c86629230f21c31dc Jan 13 21:29:11.160717 unknown[680]: fetched base config from "system" Jan 13 21:29:11.160730 unknown[680]: fetched user config from "qemu" Jan 13 21:29:11.161085 ignition[680]: fetch-offline: fetch-offline passed Jan 13 21:29:11.163863 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:29:11.161147 ignition[680]: Ignition finished successfully Jan 13 21:29:11.172225 systemd-networkd[782]: lo: Link UP Jan 13 21:29:11.172237 systemd-networkd[782]: lo: Gained carrier Jan 13 21:29:11.173759 systemd-networkd[782]: Enumeration completed Jan 13 21:29:11.173840 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:29:11.174415 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:29:11.174419 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:29:11.175669 systemd-networkd[782]: eth0: Link UP Jan 13 21:29:11.175672 systemd-networkd[782]: eth0: Gained carrier Jan 13 21:29:11.175679 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:29:11.176127 systemd[1]: Reached target network.target - Network. 
Jan 13 21:29:11.177927 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:29:11.184149 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:29:11.190051 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:29:11.197773 ignition[785]: Ignition 2.19.0 Jan 13 21:29:11.197784 ignition[785]: Stage: kargs Jan 13 21:29:11.197942 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:11.197953 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:11.198712 ignition[785]: kargs: kargs passed Jan 13 21:29:11.198754 ignition[785]: Ignition finished successfully Jan 13 21:29:11.202467 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:29:11.214120 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:29:11.226154 ignition[793]: Ignition 2.19.0 Jan 13 21:29:11.226165 ignition[793]: Stage: disks Jan 13 21:29:11.226344 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:11.226355 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:11.227130 ignition[793]: disks: disks passed Jan 13 21:29:11.229307 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:29:11.227177 ignition[793]: Ignition finished successfully Jan 13 21:29:11.230647 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:29:11.232215 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:29:11.234348 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:29:11.235377 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:29:11.237117 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:29:11.252248 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:29:11.263616 systemd-resolved[226]: Detected conflict on linux IN A 10.0.0.148 Jan 13 21:29:11.263631 systemd-resolved[226]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Jan 13 21:29:11.304393 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:29:11.545862 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:29:11.554090 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:29:11.642029 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:29:11.642652 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:29:11.643584 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:29:11.651082 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:29:11.652826 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:29:11.654178 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 13 21:29:11.659392 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jan 13 21:29:11.659417 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:11.654218 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:29:11.665831 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:29:11.665852 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:29:11.665863 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:29:11.654241 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:29:11.662209 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:29:11.666898 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:29:11.669704 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:29:11.704423 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:29:11.708365 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:29:11.713838 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:29:11.718391 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:29:11.800784 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:29:11.811111 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:29:11.813702 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:29:11.823020 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:11.838698 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:29:11.844565 ignition[926]: INFO : Ignition 2.19.0 Jan 13 21:29:11.844565 ignition[926]: INFO : Stage: mount Jan 13 21:29:11.846216 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:11.846216 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:11.846216 ignition[926]: INFO : mount: mount passed Jan 13 21:29:11.846216 ignition[926]: INFO : Ignition finished successfully Jan 13 21:29:11.851869 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:29:11.864204 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:29:11.990074 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:29:12.008265 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:29:12.017266 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938) Jan 13 21:29:12.017323 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:12.017335 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:29:12.019015 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:29:12.022011 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:29:12.023151 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:29:12.050886 ignition[955]: INFO : Ignition 2.19.0 Jan 13 21:29:12.050886 ignition[955]: INFO : Stage: files Jan 13 21:29:12.052843 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:12.052843 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:12.052843 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:29:12.056588 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:29:12.056588 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:29:12.056588 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:29:12.056588 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:29:12.056588 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:29:12.056052 unknown[955]: wrote ssh authorized keys file for user: core Jan 13 21:29:12.064433 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:29:12.064433 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:29:12.093187 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:29:12.163404 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:29:12.163404 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:29:12.167552 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 21:29:12.526887 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:29:12.577154 systemd-networkd[782]: eth0: Gained IPv6LL Jan 13 21:29:12.910219 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:29:12.910219 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:29:12.914212 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:29:12.914212 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:29:12.914212 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:29:12.914212 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 13 21:29:12.914212 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:29:12.914212 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:29:12.914212 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 13 21:29:12.914212 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:29:12.937294 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:29:12.941793 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:29:12.943472 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:29:12.943472 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:29:12.943472 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:29:12.943472 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:29:12.943472 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:29:12.943472 ignition[955]: INFO : files: files passed Jan 13 21:29:12.943472 ignition[955]: INFO : Ignition finished successfully Jan 13 21:29:12.944739 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:29:12.956270 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:29:12.959428 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 13 21:29:12.960297 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:29:12.960446 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:29:12.968290 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:29:12.970776 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:29:12.972422 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:29:12.973958 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:29:12.973735 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:29:12.975374 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:29:12.987141 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:29:13.010642 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:29:13.010758 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:29:13.011222 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:29:13.013310 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:29:13.015477 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:29:13.019430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:29:13.039066 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:29:13.052095 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:29:13.060757 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:29:13.062029 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:29:13.064245 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:29:13.066263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:29:13.066370 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:29:13.068522 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:29:13.070242 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:29:13.072256 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:29:13.074288 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:29:13.076320 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:29:13.078467 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:29:13.080573 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:29:13.082821 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:29:13.084988 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:29:13.087179 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:29:13.088957 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:29:13.089169 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:29:13.091221 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 13 21:29:13.092887 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:29:13.094922 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:29:13.095038 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:29:13.097206 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:29:13.097324 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:29:13.099467 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:29:13.099578 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:29:13.101592 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:29:13.103295 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:29:13.109079 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:29:13.110809 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:29:13.112735 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:29:13.115114 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:29:13.115205 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:29:13.116910 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:29:13.117017 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:29:13.118790 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:29:13.118895 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:29:13.120804 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:29:13.120907 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:29:13.133125 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:29:13.134087 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:29:13.134199 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:29:13.137799 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:29:13.139569 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:29:13.139793 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:29:13.141872 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:29:13.145849 ignition[1010]: INFO : Ignition 2.19.0 Jan 13 21:29:13.145849 ignition[1010]: INFO : Stage: umount Jan 13 21:29:13.145849 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:13.145849 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:13.142107 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:29:13.151837 ignition[1010]: INFO : umount: umount passed Jan 13 21:29:13.151837 ignition[1010]: INFO : Ignition finished successfully Jan 13 21:29:13.147779 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:29:13.147925 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:29:13.151971 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:29:13.152515 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:29:13.154468 systemd[1]: Stopped target network.target - Network. 
Jan 13 21:29:13.155687 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:29:13.155761 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:29:13.157713 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:29:13.157767 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:29:13.159677 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:29:13.159729 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:29:13.161622 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:29:13.161670 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:29:13.164172 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:29:13.166022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:29:13.169238 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:29:13.174042 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 13 21:29:13.174974 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:29:13.175134 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:29:13.178232 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:29:13.178365 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:29:13.181148 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:29:13.181205 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:29:13.191142 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:29:13.191762 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:29:13.191823 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:29:13.192505 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:29:13.192553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:29:13.192671 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:29:13.192718 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:29:13.192891 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:29:13.192943 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:29:13.193336 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:29:13.200750 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:29:13.200875 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:29:13.214802 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:29:13.215017 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:29:13.217228 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:29:13.217281 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:29:13.219297 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:29:13.219338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:29:13.221286 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:29:13.221333 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 21:29:13.223557 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:29:13.223605 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:29:13.225518 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:29:13.225580 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:29:13.238180 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:29:13.239348 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:29:13.239414 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:29:13.241720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:29:13.241769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:13.245538 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:29:13.245661 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:29:13.358088 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:29:13.358213 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:29:13.359097 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:29:13.359362 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:29:13.359411 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:29:13.380140 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:29:13.386662 systemd[1]: Switching root. Jan 13 21:29:13.416544 systemd-journald[192]: Journal stopped Jan 13 21:29:14.500046 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 13 21:29:14.500114 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:29:14.500135 kernel: SELinux: policy capability open_perms=1 Jan 13 21:29:14.500146 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:29:14.500158 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:29:14.500170 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:29:14.500181 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:29:14.500192 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:29:14.500203 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:29:14.500217 kernel: audit: type=1403 audit(1736803753.797:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:29:14.500233 systemd[1]: Successfully loaded SELinux policy in 39.704ms. Jan 13 21:29:14.500255 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.202ms. Jan 13 21:29:14.500271 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:29:14.500283 systemd[1]: Detected virtualization kvm. Jan 13 21:29:14.500296 systemd[1]: Detected architecture x86-64. Jan 13 21:29:14.500309 systemd[1]: Detected first boot. Jan 13 21:29:14.500324 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:29:14.500336 zram_generator::config[1054]: No configuration found. Jan 13 21:29:14.500352 systemd[1]: Populated /etc with preset unit settings. 
Jan 13 21:29:14.500364 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:29:14.500376 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:29:14.500389 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:29:14.500402 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:29:14.500414 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:29:14.500426 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:29:14.500442 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:29:14.500457 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:29:14.500469 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:29:14.500481 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:29:14.500493 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:29:14.500504 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:29:14.500517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:29:14.500529 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:29:14.500542 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:29:14.500554 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:29:14.500569 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:29:14.500581 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:29:14.500593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:29:14.500605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:29:14.500617 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:29:14.500629 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:29:14.500641 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:29:14.500655 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:29:14.500668 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:29:14.500680 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:29:14.500691 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:29:14.500703 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:29:14.500715 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:29:14.500727 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:29:14.500739 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:29:14.500751 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:29:14.500763 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:29:14.500777 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 13 21:29:14.500789 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:29:14.500806 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:29:14.500818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:29:14.500830 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:29:14.500842 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:29:14.500854 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:29:14.500867 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:29:14.500882 systemd[1]: Reached target machines.target - Containers. Jan 13 21:29:14.500894 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:29:14.500914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:29:14.500927 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:29:14.500939 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:29:14.500951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:29:14.500963 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:29:14.500975 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:29:14.500987 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:29:14.501014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:29:14.501026 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:29:14.501038 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:29:14.501050 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:29:14.501062 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:29:14.501074 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:29:14.501087 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:29:14.501103 kernel: fuse: init (API version 7.39) Jan 13 21:29:14.501117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:29:14.501129 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:29:14.501141 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:29:14.501153 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:29:14.501166 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:29:14.501178 systemd[1]: Stopped verity-setup.service. Jan 13 21:29:14.501191 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:29:14.501203 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:29:14.501215 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 13 21:29:14.501229 kernel: loop: module loaded Jan 13 21:29:14.501241 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:29:14.501274 systemd-journald[1128]: Collecting audit messages is disabled. Jan 13 21:29:14.501296 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:29:14.501311 systemd-journald[1128]: Journal started Jan 13 21:29:14.501333 systemd-journald[1128]: Runtime Journal (/run/log/journal/fdde899c0540432cba4c0f47c9c7e76d) is 6.0M, max 48.4M, 42.3M free. Jan 13 21:29:14.281733 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:29:14.296817 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:29:14.297249 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:29:14.504522 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:29:14.505527 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:29:14.507155 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:29:14.508670 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:29:14.510554 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:29:14.512465 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:29:14.512813 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:29:14.515017 kernel: ACPI: bus type drm_connector registered Jan 13 21:29:14.515283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:29:14.515503 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:29:14.517699 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:29:14.517925 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:29:14.519686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:29:14.519911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:29:14.521799 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:29:14.522040 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:29:14.523851 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:29:14.524086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:29:14.525779 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:29:14.527600 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:29:14.529472 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:29:14.546156 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:29:14.557121 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:29:14.559827 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:29:14.561270 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:29:14.561312 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:29:14.563863 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:29:14.566672 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 13 21:29:14.569440 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:29:14.570917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:29:14.573407 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:29:14.577135 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:29:14.578667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:29:14.581053 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:29:14.582696 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:29:14.584056 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:29:14.586709 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:29:14.600710 systemd-journald[1128]: Time spent on flushing to /var/log/journal/fdde899c0540432cba4c0f47c9c7e76d is 13.225ms for 952 entries. Jan 13 21:29:14.600710 systemd-journald[1128]: System Journal (/var/log/journal/fdde899c0540432cba4c0f47c9c7e76d) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:29:14.623051 systemd-journald[1128]: Received client request to flush runtime journal. Jan 13 21:29:14.623093 kernel: loop0: detected capacity change from 0 to 205544 Jan 13 21:29:14.592207 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:29:14.595366 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:29:14.597149 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:29:14.598867 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:29:14.602353 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:29:14.604422 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:29:14.613333 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:29:14.624455 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:29:14.630600 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:29:14.632845 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:29:14.635068 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:29:14.648803 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 21:29:14.652244 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:29:14.653628 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:29:14.660207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:29:14.662675 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:29:14.663256 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 13 21:29:14.685022 kernel: loop1: detected capacity change from 0 to 142488 Jan 13 21:29:14.696432 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 13 21:29:14.696613 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 13 21:29:14.703203 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:29:14.719017 kernel: loop2: detected capacity change from 0 to 140768 Jan 13 21:29:14.749022 kernel: loop3: detected capacity change from 0 to 205544 Jan 13 21:29:14.758021 kernel: loop4: detected capacity change from 0 to 142488 Jan 13 21:29:14.766014 kernel: loop5: detected capacity change from 0 to 140768 Jan 13 21:29:14.776691 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:29:14.778267 (sd-merge)[1192]: Merged extensions into '/usr'. Jan 13 21:29:14.782366 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:29:14.782381 systemd[1]: Reloading... Jan 13 21:29:14.853038 zram_generator::config[1221]: No configuration found. Jan 13 21:29:14.909577 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:29:14.971405 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:29:15.033226 systemd[1]: Reloading finished in 250 ms. Jan 13 21:29:15.071311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:29:15.072895 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:29:15.087170 systemd[1]: Starting ensure-sysext.service... Jan 13 21:29:15.089843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:29:15.095530 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:29:15.095549 systemd[1]: Reloading... Jan 13 21:29:15.111863 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:29:15.112302 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:29:15.113317 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:29:15.113615 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 13 21:29:15.113696 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 13 21:29:15.117184 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:29:15.117194 systemd-tmpfiles[1256]: Skipping /boot Jan 13 21:29:15.133684 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:29:15.133699 systemd-tmpfiles[1256]: Skipping /boot Jan 13 21:29:15.151146 zram_generator::config[1284]: No configuration found. Jan 13 21:29:15.267837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:29:15.325304 systemd[1]: Reloading finished in 229 ms. Jan 13 21:29:15.345287 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
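Here sd-merge found the three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') and overlaid them onto /usr, after which systemd reloaded its unit set. Under the systemd-sysext convention, each image is expected to carry an extension-release file whose ID/VERSION_ID (or SYSEXT_LEVEL) fields match the host before it is merged; a small illustrative check of that layout, using a hypothetical unpack directory:

    import os

    def has_extension_release(root: str, name: str) -> bool:
        """Check the systemd-sysext layout convention: an extension unpacked at `root`
        should ship usr/lib/extension-release.d/extension-release.<name>."""
        rel = os.path.join(root, "usr", "lib", "extension-release.d",
                           f"extension-release.{name}")
        return os.path.isfile(rel)

    # e.g. has_extension_release("/tmp/kubernetes-sysext", "kubernetes")  # hypothetical path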
Jan 13 21:29:15.353734 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:29:15.361171 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:29:15.364101 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:29:15.366685 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:29:15.370499 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:29:15.374437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:29:15.377796 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:29:15.384294 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:29:15.389485 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:29:15.389669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:29:15.392428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:29:15.396251 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:29:15.400034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:29:15.402124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:29:15.402230 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:29:15.403084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:29:15.403250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:29:15.407355 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:29:15.407517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:29:15.409325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:29:15.409514 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:29:15.414595 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:29:15.416520 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:29:15.418200 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Jan 13 21:29:15.418275 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:29:15.426712 augenrules[1351]: No rules Jan 13 21:29:15.431323 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:29:15.435424 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:29:15.440284 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:29:15.442217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:29:15.442429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 13 21:29:15.449359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:29:15.456867 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:29:15.461370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:29:15.465565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:29:15.467178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:29:15.474358 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:29:15.475940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:29:15.478394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:29:15.481184 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:29:15.485040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:29:15.485294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:29:15.487397 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:29:15.487620 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:29:15.491731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:29:15.491982 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:29:15.494275 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:29:15.494450 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:29:15.503606 systemd[1]: Finished ensure-sysext.service. Jan 13 21:29:15.520356 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:29:15.529043 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1368) Jan 13 21:29:15.539355 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:29:15.540706 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:29:15.540756 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:29:15.549228 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:29:15.549458 systemd-resolved[1326]: Positive Trust Anchors: Jan 13 21:29:15.549471 systemd-resolved[1326]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:29:15.549508 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:29:15.550522 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:29:15.550942 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:29:15.554392 systemd-resolved[1326]: Defaulting to hostname 'linux'. Jan 13 21:29:15.558194 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:29:15.563181 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:29:15.570246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:29:15.574029 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:29:15.578219 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 21:29:15.578490 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 21:29:15.578668 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 21:29:15.579234 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:29:15.580188 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:29:15.594348 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:29:15.598011 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 21:29:15.632246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:29:15.634471 systemd-networkd[1400]: lo: Link UP Jan 13 21:29:15.634485 systemd-networkd[1400]: lo: Gained carrier Jan 13 21:29:15.642307 systemd-networkd[1400]: Enumeration completed Jan 13 21:29:15.642415 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:29:15.645119 systemd[1]: Reached target network.target - Network. Jan 13 21:29:15.648355 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:29:15.648367 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:29:15.649968 systemd-networkd[1400]: eth0: Link UP Jan 13 21:29:15.649980 systemd-networkd[1400]: eth0: Gained carrier Jan 13 21:29:15.650003 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:29:15.654704 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:29:15.657006 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:29:15.655978 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
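The positive trust anchor that systemd-resolved reports is the root zone's DS record. Read field by field it is: owner ".", class IN, type DS, key tag 20326 (the current root KSK), algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the digest itself. A trivial split, for orientation only:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = ds.split()
    assert (key_tag, algorithm, digest_type) == ("20326", "8", "2")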
Jan 13 21:29:15.657217 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:29:15.708157 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:29:15.710692 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Jan 13 21:29:16.547925 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:29:16.548088 systemd-timesyncd[1401]: Initial clock synchronization to Mon 2025-01-13 21:29:16.547674 UTC. Jan 13 21:29:16.548141 systemd-resolved[1326]: Clock change detected. Flushing caches. Jan 13 21:29:16.569229 kernel: kvm_amd: TSC scaling supported Jan 13 21:29:16.569307 kernel: kvm_amd: Nested Virtualization enabled Jan 13 21:29:16.569337 kernel: kvm_amd: Nested Paging enabled Jan 13 21:29:16.569388 kernel: kvm_amd: LBR virtualization supported Jan 13 21:29:16.569425 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 21:29:16.569476 kernel: kvm_amd: Virtual GIF supported Jan 13 21:29:16.576224 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:16.591217 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:29:16.636610 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:29:16.649336 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:29:16.659185 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:29:16.704017 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:29:16.705589 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:29:16.706714 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:29:16.707892 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:29:16.709166 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:29:16.710628 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:29:16.711884 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:29:16.713302 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:29:16.714553 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:29:16.714581 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:29:16.715506 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:29:16.716988 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:29:16.719596 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:29:16.727693 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:29:16.730350 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:29:16.732003 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:29:16.733228 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:29:16.734246 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:29:16.734739 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 13 21:29:16.734770 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:29:16.736296 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:29:16.738531 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:29:16.743217 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:29:16.743634 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:29:16.746366 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:29:16.747704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:29:16.750280 jq[1428]: false Jan 13 21:29:16.751218 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:29:16.754283 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:29:16.756747 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:29:16.765000 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:29:16.767097 extend-filesystems[1429]: Found loop3 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found loop4 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found loop5 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found sr0 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found vda Jan 13 21:29:16.768128 extend-filesystems[1429]: Found vda1 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found vda2 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found vda3 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found usr Jan 13 21:29:16.768128 extend-filesystems[1429]: Found vda4 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found vda6 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found vda7 Jan 13 21:29:16.768128 extend-filesystems[1429]: Found vda9 Jan 13 21:29:16.768128 extend-filesystems[1429]: Checking size of /dev/vda9 Jan 13 21:29:16.771833 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:29:16.768713 dbus-daemon[1427]: [system] SELinux support is enabled Jan 13 21:29:16.775887 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:29:16.791642 extend-filesystems[1429]: Resized partition /dev/vda9 Jan 13 21:29:16.796873 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:29:16.777632 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:29:16.797267 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:29:16.778549 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:29:16.791395 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:29:16.792758 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:29:16.798285 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:29:16.801585 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1384) Jan 13 21:29:16.803062 jq[1443]: true Jan 13 21:29:16.810885 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 13 21:29:16.811186 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:29:16.811650 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:29:16.811914 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:29:16.814856 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:29:16.815169 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:29:16.823238 update_engine[1442]: I20250113 21:29:16.822991 1442 main.cc:92] Flatcar Update Engine starting Jan 13 21:29:16.827281 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:29:16.827318 update_engine[1442]: I20250113 21:29:16.826324 1442 update_check_scheduler.cc:74] Next update check in 2m57s Jan 13 21:29:16.834972 jq[1453]: true Jan 13 21:29:16.844004 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:29:16.857593 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:29:16.858010 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:29:16.859046 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:29:16.862361 systemd-logind[1437]: New seat seat0. Jan 13 21:29:16.862718 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:29:16.862718 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:29:16.862718 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:29:16.867116 extend-filesystems[1429]: Resized filesystem in /dev/vda9 Jan 13 21:29:16.865568 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:29:16.865906 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:29:16.871958 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:29:16.874867 tar[1452]: linux-amd64/helm Jan 13 21:29:16.876642 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:29:16.877187 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:29:16.879883 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:29:16.879998 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:29:16.891733 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:29:16.900258 bash[1481]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:29:16.903706 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:29:16.905738 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:29:16.906492 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:29:16.927866 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:29:16.935778 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
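The resize reported above is easy to put in absolute terms: the kernel message states 4k blocks, so growing /dev/vda9 from 553472 to 1864699 blocks takes the root filesystem from roughly 2.1 GiB to about 7.1 GiB. A quick check of that arithmetic:

    BLOCK = 4096                        # "1864699 (4k) blocks", per the message above
    old_blocks, new_blocks = 553472, 1864699
    print(f"{old_blocks * BLOCK / 2**30:.2f} GiB")   # ~2.11 GiB before the resize
    print(f"{new_blocks * BLOCK / 2**30:.2f} GiB")   # ~7.11 GiB after online growth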
Jan 13 21:29:16.951694 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:29:16.959731 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:29:16.960058 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:29:16.966521 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:29:16.979109 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:29:16.991469 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:29:16.993929 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:29:16.995659 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:29:17.041442 containerd[1454]: time="2025-01-13T21:29:17.041350275Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:29:17.065055 containerd[1454]: time="2025-01-13T21:29:17.065017800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067247 containerd[1454]: time="2025-01-13T21:29:17.066984398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067247 containerd[1454]: time="2025-01-13T21:29:17.067015516Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:29:17.067247 containerd[1454]: time="2025-01-13T21:29:17.067029803Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:29:17.067247 containerd[1454]: time="2025-01-13T21:29:17.067246329Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:29:17.067247 containerd[1454]: time="2025-01-13T21:29:17.067263151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067426 containerd[1454]: time="2025-01-13T21:29:17.067328303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067426 containerd[1454]: time="2025-01-13T21:29:17.067340706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067562 containerd[1454]: time="2025-01-13T21:29:17.067540751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067562 containerd[1454]: time="2025-01-13T21:29:17.067560278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067612 containerd[1454]: time="2025-01-13T21:29:17.067575046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067612 containerd[1454]: time="2025-01-13T21:29:17.067586607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067714 containerd[1454]: time="2025-01-13T21:29:17.067676466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:29:17.067963 containerd[1454]: time="2025-01-13T21:29:17.067943587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:29:17.068110 containerd[1454]: time="2025-01-13T21:29:17.068089931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:29:17.068110 containerd[1454]: time="2025-01-13T21:29:17.068106743Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:29:17.068292 containerd[1454]: time="2025-01-13T21:29:17.068273566Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:29:17.068369 containerd[1454]: time="2025-01-13T21:29:17.068346623Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:29:17.074183 containerd[1454]: time="2025-01-13T21:29:17.074158221Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:29:17.074241 containerd[1454]: time="2025-01-13T21:29:17.074209688Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:29:17.074241 containerd[1454]: time="2025-01-13T21:29:17.074225648Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:29:17.074279 containerd[1454]: time="2025-01-13T21:29:17.074243341Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:29:17.074279 containerd[1454]: time="2025-01-13T21:29:17.074257578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:29:17.074401 containerd[1454]: time="2025-01-13T21:29:17.074380949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:29:17.075640 containerd[1454]: time="2025-01-13T21:29:17.075588293Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:29:17.075811 containerd[1454]: time="2025-01-13T21:29:17.075792186Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:29:17.075846 containerd[1454]: time="2025-01-13T21:29:17.075813065Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:29:17.075846 containerd[1454]: time="2025-01-13T21:29:17.075830267Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:29:17.075880 containerd[1454]: time="2025-01-13T21:29:17.075848642Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:29:17.075880 containerd[1454]: time="2025-01-13T21:29:17.075864642Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 13 21:29:17.075914 containerd[1454]: time="2025-01-13T21:29:17.075880051Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:29:17.075914 containerd[1454]: time="2025-01-13T21:29:17.075897553Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:29:17.075957 containerd[1454]: time="2025-01-13T21:29:17.075915126Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:29:17.075957 containerd[1454]: time="2025-01-13T21:29:17.075928391Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:29:17.075957 containerd[1454]: time="2025-01-13T21:29:17.075950763Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:29:17.076007 containerd[1454]: time="2025-01-13T21:29:17.075965721Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:29:17.076007 containerd[1454]: time="2025-01-13T21:29:17.075990678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076040 containerd[1454]: time="2025-01-13T21:29:17.076009243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076040 containerd[1454]: time="2025-01-13T21:29:17.076025113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076095 containerd[1454]: time="2025-01-13T21:29:17.076041032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076095 containerd[1454]: time="2025-01-13T21:29:17.076066610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076095 containerd[1454]: time="2025-01-13T21:29:17.076081458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076152 containerd[1454]: time="2025-01-13T21:29:17.076096516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076152 containerd[1454]: time="2025-01-13T21:29:17.076112206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076152 containerd[1454]: time="2025-01-13T21:29:17.076129308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076220 containerd[1454]: time="2025-01-13T21:29:17.076156439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076220 containerd[1454]: time="2025-01-13T21:29:17.076172118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076220 containerd[1454]: time="2025-01-13T21:29:17.076187397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076220 containerd[1454]: time="2025-01-13T21:29:17.076212764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 13 21:29:17.076292 containerd[1454]: time="2025-01-13T21:29:17.076231069Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:29:17.076292 containerd[1454]: time="2025-01-13T21:29:17.076262327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076292 containerd[1454]: time="2025-01-13T21:29:17.076275743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076342 containerd[1454]: time="2025-01-13T21:29:17.076289979Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:29:17.076361 containerd[1454]: time="2025-01-13T21:29:17.076348830Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:29:17.076379 containerd[1454]: time="2025-01-13T21:29:17.076368326Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:29:17.076404 containerd[1454]: time="2025-01-13T21:29:17.076379537Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:29:17.076404 containerd[1454]: time="2025-01-13T21:29:17.076394936Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:29:17.076439 containerd[1454]: time="2025-01-13T21:29:17.076407279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:29:17.076439 containerd[1454]: time="2025-01-13T21:29:17.076422087Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:29:17.076439 containerd[1454]: time="2025-01-13T21:29:17.076435192Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:29:17.076488 containerd[1454]: time="2025-01-13T21:29:17.076446032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:29:17.076846 containerd[1454]: time="2025-01-13T21:29:17.076783515Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:29:17.076846 containerd[1454]: time="2025-01-13T21:29:17.076850430Z" level=info msg="Connect containerd service" Jan 13 21:29:17.076993 containerd[1454]: time="2025-01-13T21:29:17.076883823Z" level=info msg="using legacy CRI server" Jan 13 21:29:17.076993 containerd[1454]: time="2025-01-13T21:29:17.076890766Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:29:17.077028 containerd[1454]: time="2025-01-13T21:29:17.076994210Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:29:17.078085 containerd[1454]: time="2025-01-13T21:29:17.077896382Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:29:17.078085 
containerd[1454]: time="2025-01-13T21:29:17.078040262Z" level=info msg="Start subscribing containerd event" Jan 13 21:29:17.078145 containerd[1454]: time="2025-01-13T21:29:17.078109702Z" level=info msg="Start recovering state" Jan 13 21:29:17.078270 containerd[1454]: time="2025-01-13T21:29:17.078250787Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:29:17.078270 containerd[1454]: time="2025-01-13T21:29:17.078265324Z" level=info msg="Start event monitor" Jan 13 21:29:17.078313 containerd[1454]: time="2025-01-13T21:29:17.078286503Z" level=info msg="Start snapshots syncer" Jan 13 21:29:17.078313 containerd[1454]: time="2025-01-13T21:29:17.078298215Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:29:17.078313 containerd[1454]: time="2025-01-13T21:29:17.078305920Z" level=info msg="Start streaming server" Jan 13 21:29:17.078372 containerd[1454]: time="2025-01-13T21:29:17.078305529Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:29:17.078392 containerd[1454]: time="2025-01-13T21:29:17.078369449Z" level=info msg="containerd successfully booted in 0.038066s" Jan 13 21:29:17.078508 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:29:17.236617 tar[1452]: linux-amd64/LICENSE Jan 13 21:29:17.236729 tar[1452]: linux-amd64/README.md Jan 13 21:29:17.252397 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:29:17.578441 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:29:17.580818 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:49156.service - OpenSSH per-connection server daemon (10.0.0.1:49156). Jan 13 21:29:17.628307 sshd[1519]: Accepted publickey for core from 10.0.0.1 port 49156 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:29:17.630260 sshd[1519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:17.639939 systemd-logind[1437]: New session 1 of user core. Jan 13 21:29:17.641636 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:29:17.658459 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:29:17.669689 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:29:17.689525 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:29:17.693709 (systemd)[1523]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:29:17.810055 systemd[1523]: Queued start job for default target default.target. Jan 13 21:29:17.824445 systemd[1523]: Created slice app.slice - User Application Slice. Jan 13 21:29:17.824470 systemd[1523]: Reached target paths.target - Paths. Jan 13 21:29:17.824483 systemd[1523]: Reached target timers.target - Timers. Jan 13 21:29:17.825968 systemd[1523]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:29:17.837698 systemd[1523]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:29:17.837857 systemd[1523]: Reached target sockets.target - Sockets. Jan 13 21:29:17.837883 systemd[1523]: Reached target basic.target - Basic System. Jan 13 21:29:17.837930 systemd[1523]: Reached target default.target - Main User Target. Jan 13 21:29:17.837972 systemd[1523]: Startup finished in 136ms. Jan 13 21:29:17.838401 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:29:17.841059 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 13 21:29:17.903249 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:59344.service - OpenSSH per-connection server daemon (10.0.0.1:59344). Jan 13 21:29:17.941018 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 59344 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:29:17.942522 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:17.946320 systemd-logind[1437]: New session 2 of user core. Jan 13 21:29:17.962322 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:29:18.018216 sshd[1534]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:18.030145 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:59344.service: Deactivated successfully. Jan 13 21:29:18.031967 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:29:18.033426 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:29:18.040462 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:59358.service - OpenSSH per-connection server daemon (10.0.0.1:59358). Jan 13 21:29:18.042935 systemd-logind[1437]: Removed session 2. Jan 13 21:29:18.076563 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 59358 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:29:18.078169 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:18.082246 systemd-logind[1437]: New session 3 of user core. Jan 13 21:29:18.089316 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:29:18.144127 sshd[1541]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:18.148017 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:59358.service: Deactivated successfully. Jan 13 21:29:18.149803 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:29:18.150347 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:29:18.151099 systemd-logind[1437]: Removed session 3. Jan 13 21:29:18.404304 systemd-networkd[1400]: eth0: Gained IPv6LL Jan 13 21:29:18.407175 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:29:18.408952 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:29:18.422393 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:29:18.424665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:18.426734 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:29:18.446012 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:29:18.446369 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:29:18.448222 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:29:18.450273 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:29:19.124359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:19.126412 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:29:19.127926 systemd[1]: Startup finished in 675ms (kernel) + 5.106s (initrd) + 4.533s (userspace) = 10.314s. 
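The "Startup finished in 675ms (kernel) + 5.106s (initrd) + 4.533s (userspace) = 10.314s" line is the same summary that systemd-analyze reports after boot. To break a boot like this down further, the standard systemd tooling applies (nothing here is specific to this log):

    systemd-analyze                                      # kernel / initrd / userspace split
    systemd-analyze blame                                # per-unit startup time, slowest first
    systemd-analyze critical-chain multi-user.target     # the chain that gated reaching multi-user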
Jan 13 21:29:19.128557 (kubelet)[1569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:29:19.523887 kubelet[1569]: E0113 21:29:19.523701 1569 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:29:19.527689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:29:19.527883 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:29:28.154599 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:36590.service - OpenSSH per-connection server daemon (10.0.0.1:36590). Jan 13 21:29:28.190225 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 36590 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:29:28.191677 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:28.195311 systemd-logind[1437]: New session 4 of user core. Jan 13 21:29:28.206314 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:29:28.260460 sshd[1583]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:28.269868 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:36590.service: Deactivated successfully. Jan 13 21:29:28.271774 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:29:28.273217 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:29:28.283480 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:36594.service - OpenSSH per-connection server daemon (10.0.0.1:36594). Jan 13 21:29:28.284363 systemd-logind[1437]: Removed session 4. Jan 13 21:29:28.314280 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 36594 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:29:28.315662 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:28.319374 systemd-logind[1437]: New session 5 of user core. Jan 13 21:29:28.329318 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:29:28.378232 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:28.394718 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:36594.service: Deactivated successfully. Jan 13 21:29:28.396413 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:29:28.397930 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:29:28.407418 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:36602.service - OpenSSH per-connection server daemon (10.0.0.1:36602). Jan 13 21:29:28.408447 systemd-logind[1437]: Removed session 5. Jan 13 21:29:28.440142 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 36602 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:29:28.441656 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:28.445224 systemd-logind[1437]: New session 6 of user core. Jan 13 21:29:28.453302 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:29:28.506535 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:28.520951 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:36602.service: Deactivated successfully. 
Jan 13 21:29:28.522916 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:29:28.524292 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:29:28.533550 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:36614.service - OpenSSH per-connection server daemon (10.0.0.1:36614). Jan 13 21:29:28.534496 systemd-logind[1437]: Removed session 6. Jan 13 21:29:28.564654 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 36614 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:29:28.566072 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:28.569650 systemd-logind[1437]: New session 7 of user core. Jan 13 21:29:28.579315 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:29:28.637998 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:29:28.638346 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:28.654222 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:28.655919 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:28.667870 systemd[1]: sshd@6-10.0.0.148:22-10.0.0.1:36614.service: Deactivated successfully. Jan 13 21:29:28.669489 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:29:28.671223 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:29:28.672522 systemd[1]: Started sshd@7-10.0.0.148:22-10.0.0.1:36624.service - OpenSSH per-connection server daemon (10.0.0.1:36624). Jan 13 21:29:28.673293 systemd-logind[1437]: Removed session 7. Jan 13 21:29:28.718908 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 36624 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:29:28.720507 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:28.724386 systemd-logind[1437]: New session 8 of user core. Jan 13 21:29:28.734296 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:29:28.787670 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:29:28.787996 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:28.791233 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:28.797319 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:29:28.797659 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:28.815417 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:29:28.817311 auditctl[1619]: No rules Jan 13 21:29:28.818513 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:29:28.818758 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:29:28.820419 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:29:28.851826 augenrules[1637]: No rules Jan 13 21:29:28.853569 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:29:28.854874 sudo[1615]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:28.856641 sshd[1612]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:28.866059 systemd[1]: sshd@7-10.0.0.148:22-10.0.0.1:36624.service: Deactivated successfully. 
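The sudo and audit lines above amount to: delete the shipped SELinux audit rule files, restart audit-rules, and end up with an empty rule set (both auditctl and augenrules report "No rules"). A rough shell equivalent, assuming the unit drives the standard auditd userspace tools (the exact ExecStart of Flatcar's audit-rules.service is not visible in this log):

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo auditctl -D           # flush the rules currently loaded in the kernel
    sudo augenrules --load     # rebuild /etc/audit/audit.rules from rules.d/ and load it
    sudo auditctl -l           # confirm what is loaded ("No rules" here)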
Jan 13 21:29:28.867822 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:29:28.869250 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:29:28.880430 systemd[1]: Started sshd@8-10.0.0.148:22-10.0.0.1:36632.service - OpenSSH per-connection server daemon (10.0.0.1:36632). Jan 13 21:29:28.881358 systemd-logind[1437]: Removed session 8. Jan 13 21:29:28.913115 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 36632 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:29:28.914556 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:28.918324 systemd-logind[1437]: New session 9 of user core. Jan 13 21:29:28.930293 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:29:28.982658 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:29:28.982987 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:29.257396 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:29:29.257524 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:29:29.528306 dockerd[1666]: time="2025-01-13T21:29:29.528241653Z" level=info msg="Starting up" Jan 13 21:29:29.533885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:29:29.542453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:29.734813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:29.738944 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:29:29.846024 kubelet[1697]: E0113 21:29:29.845833 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:29:29.852878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:29:29.853079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:29:29.860834 systemd[1]: var-lib-docker-metacopy\x2dcheck2961429968-merged.mount: Deactivated successfully. Jan 13 21:29:29.897760 dockerd[1666]: time="2025-01-13T21:29:29.897704278Z" level=info msg="Loading containers: start." Jan 13 21:29:29.996216 kernel: Initializing XFRM netlink socket Jan 13 21:29:30.075127 systemd-networkd[1400]: docker0: Link UP Jan 13 21:29:30.098651 dockerd[1666]: time="2025-01-13T21:29:30.098565544Z" level=info msg="Loading containers: done." Jan 13 21:29:30.111824 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2255164433-merged.mount: Deactivated successfully. 
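docker.service above warns that DOCKER_OPTS, DOCKER_CGROUPS and related variables are referenced but unset; that is harmless, since they simply expand to empty strings. If a deployment did want to pass extra daemon flags through those variables, the usual mechanism is a systemd drop-in; a sketch with an illustrative value (nothing in this log actually sets it):

    # /etc/systemd/system/docker.service.d/10-opts.conf
    [Service]
    Environment="DOCKER_OPTS=--log-level=warn"

followed by `systemctl daemon-reload` and `systemctl restart docker`.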
Jan 13 21:29:30.114692 dockerd[1666]: time="2025-01-13T21:29:30.114649377Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:29:30.114774 dockerd[1666]: time="2025-01-13T21:29:30.114753212Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:29:30.114890 dockerd[1666]: time="2025-01-13T21:29:30.114867446Z" level=info msg="Daemon has completed initialization" Jan 13 21:29:30.151919 dockerd[1666]: time="2025-01-13T21:29:30.151782114Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:29:30.152101 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:29:30.889572 containerd[1454]: time="2025-01-13T21:29:30.889518884Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 21:29:31.579955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325575484.mount: Deactivated successfully. Jan 13 21:29:32.404383 containerd[1454]: time="2025-01-13T21:29:32.404327160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:32.405354 containerd[1454]: time="2025-01-13T21:29:32.405294264Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Jan 13 21:29:32.406420 containerd[1454]: time="2025-01-13T21:29:32.406393005Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:32.409342 containerd[1454]: time="2025-01-13T21:29:32.409314183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:32.410411 containerd[1454]: time="2025-01-13T21:29:32.410383137Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 1.520815081s" Jan 13 21:29:32.410452 containerd[1454]: time="2025-01-13T21:29:32.410411180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Jan 13 21:29:32.411685 containerd[1454]: time="2025-01-13T21:29:32.411617963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 21:29:33.539896 containerd[1454]: time="2025-01-13T21:29:33.539813312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:33.540591 containerd[1454]: time="2025-01-13T21:29:33.540531439Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Jan 13 21:29:33.541734 containerd[1454]: time="2025-01-13T21:29:33.541699129Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:33.547054 containerd[1454]: time="2025-01-13T21:29:33.544601963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:33.547054 containerd[1454]: time="2025-01-13T21:29:33.546316728Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.134627181s" Jan 13 21:29:33.547054 containerd[1454]: time="2025-01-13T21:29:33.546339772Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Jan 13 21:29:33.547557 containerd[1454]: time="2025-01-13T21:29:33.547534242Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 21:29:34.942679 containerd[1454]: time="2025-01-13T21:29:34.942612612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:34.943723 containerd[1454]: time="2025-01-13T21:29:34.943684161Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Jan 13 21:29:34.944978 containerd[1454]: time="2025-01-13T21:29:34.944947801Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:34.947632 containerd[1454]: time="2025-01-13T21:29:34.947569347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:34.948484 containerd[1454]: time="2025-01-13T21:29:34.948451642Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.400886292s" Jan 13 21:29:34.948536 containerd[1454]: time="2025-01-13T21:29:34.948485265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Jan 13 21:29:34.949342 containerd[1454]: time="2025-01-13T21:29:34.949278032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:29:35.931611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269654374.mount: Deactivated successfully. 
Jan 13 21:29:36.642892 containerd[1454]: time="2025-01-13T21:29:36.642816110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:36.643649 containerd[1454]: time="2025-01-13T21:29:36.643605531Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Jan 13 21:29:36.644766 containerd[1454]: time="2025-01-13T21:29:36.644717756Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:36.647539 containerd[1454]: time="2025-01-13T21:29:36.647478544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:36.648085 containerd[1454]: time="2025-01-13T21:29:36.648049304Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.69874282s" Jan 13 21:29:36.648121 containerd[1454]: time="2025-01-13T21:29:36.648085532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 21:29:36.648620 containerd[1454]: time="2025-01-13T21:29:36.648582083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:29:37.247557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544260248.mount: Deactivated successfully. 
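The PullImage lines in this stretch are the Kubernetes control-plane and add-on images being fetched through containerd's CRI. The same images can be listed or pulled by hand for debugging, assuming crictl is pointed at the containerd socket (/run/containerd/containerd.sock) and using containerd's k8s.io namespace:

    crictl images                                              # images visible to the CRI
    crictl pull registry.k8s.io/kube-proxy:v1.31.4             # one of the images pulled above
    ctr --namespace k8s.io images ls
    ctr --namespace k8s.io images pull registry.k8s.io/coredns/coredns:v1.11.1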
Jan 13 21:29:37.876994 containerd[1454]: time="2025-01-13T21:29:37.876930305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:37.877833 containerd[1454]: time="2025-01-13T21:29:37.877758177Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:29:37.878960 containerd[1454]: time="2025-01-13T21:29:37.878930786Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:37.881732 containerd[1454]: time="2025-01-13T21:29:37.881702564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:37.882789 containerd[1454]: time="2025-01-13T21:29:37.882759797Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.234137278s" Jan 13 21:29:37.882836 containerd[1454]: time="2025-01-13T21:29:37.882788741Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:29:37.883419 containerd[1454]: time="2025-01-13T21:29:37.883306793Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 21:29:38.416904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3166086444.mount: Deactivated successfully. 
Jan 13 21:29:38.422813 containerd[1454]: time="2025-01-13T21:29:38.422761223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:38.423563 containerd[1454]: time="2025-01-13T21:29:38.423524644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 13 21:29:38.424657 containerd[1454]: time="2025-01-13T21:29:38.424611042Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:38.428614 containerd[1454]: time="2025-01-13T21:29:38.428566159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 545.216876ms" Jan 13 21:29:38.428614 containerd[1454]: time="2025-01-13T21:29:38.428599802Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 13 21:29:38.429262 containerd[1454]: time="2025-01-13T21:29:38.428974224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:38.429262 containerd[1454]: time="2025-01-13T21:29:38.429058472Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 21:29:38.974810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3389103150.mount: Deactivated successfully. Jan 13 21:29:40.021220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:29:40.031486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:40.177400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:40.182671 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:29:40.222307 kubelet[2010]: E0113 21:29:40.222252 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:29:40.225764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:29:40.225951 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
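This is the third time kubelet.service exits with the same error: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts. That file is normally written by kubeadm during init/join, after which the restarts stop, as the later successful kubelet start in this log shows. Purely as a sketch of what such a file looks like, not the one kubeadm will generate here, and with the cluster DNS address being a conventional default rather than a value from this log:

    # /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10

cgroupDriver: systemd and staticPodPath: /etc/kubernetes/manifests match what the kubelet reports once it does start further down ("CgroupDriver":"systemd", "Adding static pod path" path="/etc/kubernetes/manifests").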
Jan 13 21:29:41.519689 containerd[1454]: time="2025-01-13T21:29:41.519634936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:41.520454 containerd[1454]: time="2025-01-13T21:29:41.520392587Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 13 21:29:41.521626 containerd[1454]: time="2025-01-13T21:29:41.521592297Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:41.524419 containerd[1454]: time="2025-01-13T21:29:41.524381498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:41.525571 containerd[1454]: time="2025-01-13T21:29:41.525542174Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.096446583s" Jan 13 21:29:41.525606 containerd[1454]: time="2025-01-13T21:29:41.525570387Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 13 21:29:43.879407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:43.890417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:43.914868 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-9.scope)... Jan 13 21:29:43.914883 systemd[1]: Reloading... Jan 13 21:29:44.003260 zram_generator::config[2093]: No configuration found. Jan 13 21:29:44.248123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:29:44.324642 systemd[1]: Reloading finished in 409 ms. Jan 13 21:29:44.371953 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:29:44.372185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:44.374443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:44.531277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:44.536696 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:29:44.568532 kubelet[2138]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:44.568532 kubelet[2138]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:29:44.568532 kubelet[2138]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:44.569572 kubelet[2138]: I0113 21:29:44.569524 2138 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:29:44.910408 kubelet[2138]: I0113 21:29:44.910296 2138 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:29:44.910408 kubelet[2138]: I0113 21:29:44.910334 2138 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:29:44.910569 kubelet[2138]: I0113 21:29:44.910552 2138 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:29:44.931083 kubelet[2138]: E0113 21:29:44.931022 2138 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:44.931214 kubelet[2138]: I0113 21:29:44.931102 2138 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:29:44.939892 kubelet[2138]: E0113 21:29:44.939839 2138 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:29:44.939892 kubelet[2138]: I0113 21:29:44.939872 2138 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:29:44.945631 kubelet[2138]: I0113 21:29:44.945591 2138 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:29:44.946523 kubelet[2138]: I0113 21:29:44.946487 2138 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:29:44.946669 kubelet[2138]: I0113 21:29:44.946624 2138 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:29:44.946820 kubelet[2138]: I0113 21:29:44.946652 2138 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:29:44.946820 kubelet[2138]: I0113 21:29:44.946808 2138 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:29:44.946820 kubelet[2138]: I0113 21:29:44.946817 2138 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:29:44.946998 kubelet[2138]: I0113 21:29:44.946923 2138 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:44.948255 kubelet[2138]: I0113 21:29:44.948229 2138 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:29:44.948255 kubelet[2138]: I0113 21:29:44.948247 2138 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:29:44.948358 kubelet[2138]: I0113 21:29:44.948280 2138 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:29:44.948358 kubelet[2138]: I0113 21:29:44.948294 2138 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:29:44.950737 kubelet[2138]: W0113 21:29:44.950671 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 21:29:44.950825 kubelet[2138]: E0113 21:29:44.950740 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:44.952232 kubelet[2138]: I0113 21:29:44.952211 2138 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:29:44.952512 kubelet[2138]: W0113 21:29:44.952478 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 21:29:44.952565 kubelet[2138]: E0113 21:29:44.952516 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:44.953666 kubelet[2138]: I0113 21:29:44.953651 2138 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:29:44.954547 kubelet[2138]: W0113 21:29:44.954527 2138 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:29:44.955401 kubelet[2138]: I0113 21:29:44.955135 2138 server.go:1269] "Started kubelet" Jan 13 21:29:44.955753 kubelet[2138]: I0113 21:29:44.955703 2138 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:29:44.956127 kubelet[2138]: I0113 21:29:44.956104 2138 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:29:44.956218 kubelet[2138]: I0113 21:29:44.956163 2138 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:29:44.956372 kubelet[2138]: I0113 21:29:44.956346 2138 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:29:44.958146 kubelet[2138]: I0113 21:29:44.956721 2138 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:29:44.958146 kubelet[2138]: I0113 21:29:44.957045 2138 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:29:44.959117 kubelet[2138]: I0113 21:29:44.958499 2138 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:29:44.959117 kubelet[2138]: I0113 21:29:44.958566 2138 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:29:44.959117 kubelet[2138]: I0113 21:29:44.958618 2138 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:29:44.959117 kubelet[2138]: W0113 21:29:44.958846 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 21:29:44.959117 kubelet[2138]: E0113 21:29:44.958878 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" 
logger="UnhandledError" Jan 13 21:29:44.959117 kubelet[2138]: E0113 21:29:44.958988 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:29:44.959117 kubelet[2138]: E0113 21:29:44.959027 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="200ms" Jan 13 21:29:44.959779 kubelet[2138]: I0113 21:29:44.959756 2138 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:29:44.960772 kubelet[2138]: I0113 21:29:44.959849 2138 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:29:44.960772 kubelet[2138]: E0113 21:29:44.960669 2138 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:29:44.961909 kubelet[2138]: E0113 21:29:44.958993 2138 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5dd726cdbc61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:29:44.955116641 +0000 UTC m=+0.414443871,LastTimestamp:2025-01-13 21:29:44.955116641 +0000 UTC m=+0.414443871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:29:44.962138 kubelet[2138]: I0113 21:29:44.962110 2138 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:29:44.976022 kubelet[2138]: I0113 21:29:44.975958 2138 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:29:44.978784 kubelet[2138]: I0113 21:29:44.977650 2138 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:29:44.978784 kubelet[2138]: I0113 21:29:44.977680 2138 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:29:44.978784 kubelet[2138]: I0113 21:29:44.977697 2138 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:29:44.978784 kubelet[2138]: E0113 21:29:44.977738 2138 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:29:44.978784 kubelet[2138]: W0113 21:29:44.978141 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 21:29:44.978784 kubelet[2138]: E0113 21:29:44.978181 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:44.978784 kubelet[2138]: I0113 21:29:44.978583 2138 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:29:44.978784 kubelet[2138]: I0113 21:29:44.978592 2138 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:29:44.978784 kubelet[2138]: I0113 21:29:44.978607 2138 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:45.059357 kubelet[2138]: E0113 21:29:45.059321 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:29:45.078601 kubelet[2138]: E0113 21:29:45.078561 2138 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:29:45.160015 kubelet[2138]: E0113 21:29:45.159957 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:29:45.160257 kubelet[2138]: E0113 21:29:45.160217 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="400ms" Jan 13 21:29:45.260695 kubelet[2138]: E0113 21:29:45.260551 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:29:45.278673 kubelet[2138]: E0113 21:29:45.278621 2138 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:29:45.346480 kubelet[2138]: I0113 21:29:45.346440 2138 policy_none.go:49] "None policy: Start" Jan 13 21:29:45.347144 kubelet[2138]: I0113 21:29:45.347122 2138 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:29:45.347144 kubelet[2138]: I0113 21:29:45.347145 2138 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:29:45.352564 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:29:45.360918 kubelet[2138]: E0113 21:29:45.360889 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:29:45.362892 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 13 21:29:45.365647 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:29:45.377064 kubelet[2138]: I0113 21:29:45.377032 2138 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:29:45.377306 kubelet[2138]: I0113 21:29:45.377278 2138 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:29:45.377383 kubelet[2138]: I0113 21:29:45.377310 2138 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:29:45.377638 kubelet[2138]: I0113 21:29:45.377623 2138 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:29:45.378694 kubelet[2138]: E0113 21:29:45.378666 2138 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:29:45.478877 kubelet[2138]: I0113 21:29:45.478850 2138 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:29:45.479123 kubelet[2138]: E0113 21:29:45.479101 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jan 13 21:29:45.560735 kubelet[2138]: E0113 21:29:45.560689 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="800ms" Jan 13 21:29:45.680575 kubelet[2138]: I0113 21:29:45.680532 2138 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:29:45.680933 kubelet[2138]: E0113 21:29:45.680801 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jan 13 21:29:45.686484 systemd[1]: Created slice kubepods-burstable-podd4fbd11920367ef0fb89e5fdadc91831.slice - libcontainer container kubepods-burstable-podd4fbd11920367ef0fb89e5fdadc91831.slice. Jan 13 21:29:45.696380 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Jan 13 21:29:45.699575 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
Jan 13 21:29:45.764326 kubelet[2138]: I0113 21:29:45.764260 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4fbd11920367ef0fb89e5fdadc91831-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d4fbd11920367ef0fb89e5fdadc91831\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:29:45.764326 kubelet[2138]: I0113 21:29:45.764310 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:45.764326 kubelet[2138]: I0113 21:29:45.764335 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:45.764496 kubelet[2138]: I0113 21:29:45.764360 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:29:45.764496 kubelet[2138]: I0113 21:29:45.764377 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4fbd11920367ef0fb89e5fdadc91831-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4fbd11920367ef0fb89e5fdadc91831\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:29:45.764496 kubelet[2138]: I0113 21:29:45.764391 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:45.764496 kubelet[2138]: I0113 21:29:45.764406 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:45.764496 kubelet[2138]: I0113 21:29:45.764419 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:45.764608 kubelet[2138]: I0113 21:29:45.764443 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4fbd11920367ef0fb89e5fdadc91831-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4fbd11920367ef0fb89e5fdadc91831\") " 
pod="kube-system/kube-apiserver-localhost" Jan 13 21:29:45.903381 kubelet[2138]: W0113 21:29:45.903223 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 21:29:45.903381 kubelet[2138]: E0113 21:29:45.903311 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:45.948184 kubelet[2138]: W0113 21:29:45.948136 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 21:29:45.948184 kubelet[2138]: E0113 21:29:45.948184 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:45.994385 kubelet[2138]: E0113 21:29:45.994358 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:45.994919 containerd[1454]: time="2025-01-13T21:29:45.994878345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d4fbd11920367ef0fb89e5fdadc91831,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:45.999140 kubelet[2138]: E0113 21:29:45.999107 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:45.999545 containerd[1454]: time="2025-01-13T21:29:45.999513928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:46.001832 kubelet[2138]: E0113 21:29:46.001809 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:46.002139 containerd[1454]: time="2025-01-13T21:29:46.002102272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:46.082601 kubelet[2138]: I0113 21:29:46.082558 2138 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:29:46.082887 kubelet[2138]: E0113 21:29:46.082852 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jan 13 21:29:46.091188 kubelet[2138]: W0113 21:29:46.091153 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: 
connection refused Jan 13 21:29:46.091301 kubelet[2138]: E0113 21:29:46.091187 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:46.361760 kubelet[2138]: E0113 21:29:46.361711 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="1.6s" Jan 13 21:29:46.371405 kubelet[2138]: W0113 21:29:46.371321 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 21:29:46.371405 kubelet[2138]: E0113 21:29:46.371407 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:46.573970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1615592604.mount: Deactivated successfully. Jan 13 21:29:46.582894 containerd[1454]: time="2025-01-13T21:29:46.582848038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:46.583737 containerd[1454]: time="2025-01-13T21:29:46.583695988Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:29:46.585033 containerd[1454]: time="2025-01-13T21:29:46.584978864Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:46.585872 containerd[1454]: time="2025-01-13T21:29:46.585814341Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:46.586809 containerd[1454]: time="2025-01-13T21:29:46.586782827Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:46.587672 containerd[1454]: time="2025-01-13T21:29:46.587644353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:29:46.588754 containerd[1454]: time="2025-01-13T21:29:46.588712456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:29:46.590160 containerd[1454]: time="2025-01-13T21:29:46.590129463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jan 13 21:29:46.592452 containerd[1454]: time="2025-01-13T21:29:46.591986526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 597.034783ms" Jan 13 21:29:46.595944 containerd[1454]: time="2025-01-13T21:29:46.595920484Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.337906ms" Jan 13 21:29:46.596434 containerd[1454]: time="2025-01-13T21:29:46.596409110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 594.251885ms" Jan 13 21:29:46.727668 containerd[1454]: time="2025-01-13T21:29:46.726800500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:46.728273 containerd[1454]: time="2025-01-13T21:29:46.728152856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:46.728273 containerd[1454]: time="2025-01-13T21:29:46.728234730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:46.728273 containerd[1454]: time="2025-01-13T21:29:46.728245410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:46.728532 containerd[1454]: time="2025-01-13T21:29:46.728474560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:46.729109 containerd[1454]: time="2025-01-13T21:29:46.728014878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:46.729109 containerd[1454]: time="2025-01-13T21:29:46.728816251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:46.729109 containerd[1454]: time="2025-01-13T21:29:46.728912050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:46.730394 containerd[1454]: time="2025-01-13T21:29:46.730045476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:46.730394 containerd[1454]: time="2025-01-13T21:29:46.730112311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:46.730394 containerd[1454]: time="2025-01-13T21:29:46.730131046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:46.730394 containerd[1454]: time="2025-01-13T21:29:46.730270167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:46.749416 systemd[1]: Started cri-containerd-dacff4a59b3b8a179bcdc085a0f0baf9ff93593049b005647f0297ea8a8a7be2.scope - libcontainer container dacff4a59b3b8a179bcdc085a0f0baf9ff93593049b005647f0297ea8a8a7be2. Jan 13 21:29:46.753256 systemd[1]: Started cri-containerd-23caa645424047ce7e0bd97b37310f79f61b22f75021b7092300119d95b95854.scope - libcontainer container 23caa645424047ce7e0bd97b37310f79f61b22f75021b7092300119d95b95854. Jan 13 21:29:46.754812 systemd[1]: Started cri-containerd-33371beeb5bdd7fb8ff038025fc0e416195a693dbf3b9ebb3e76fc22a0da1cf5.scope - libcontainer container 33371beeb5bdd7fb8ff038025fc0e416195a693dbf3b9ebb3e76fc22a0da1cf5. Jan 13 21:29:46.794906 containerd[1454]: time="2025-01-13T21:29:46.794230446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"23caa645424047ce7e0bd97b37310f79f61b22f75021b7092300119d95b95854\"" Jan 13 21:29:46.794906 containerd[1454]: time="2025-01-13T21:29:46.794541569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"dacff4a59b3b8a179bcdc085a0f0baf9ff93593049b005647f0297ea8a8a7be2\"" Jan 13 21:29:46.795933 kubelet[2138]: E0113 21:29:46.795743 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:46.795933 kubelet[2138]: E0113 21:29:46.795875 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:46.797568 containerd[1454]: time="2025-01-13T21:29:46.797530605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d4fbd11920367ef0fb89e5fdadc91831,Namespace:kube-system,Attempt:0,} returns sandbox id \"33371beeb5bdd7fb8ff038025fc0e416195a693dbf3b9ebb3e76fc22a0da1cf5\"" Jan 13 21:29:46.797763 containerd[1454]: time="2025-01-13T21:29:46.797733605Z" level=info msg="CreateContainer within sandbox \"23caa645424047ce7e0bd97b37310f79f61b22f75021b7092300119d95b95854\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:29:46.798174 containerd[1454]: time="2025-01-13T21:29:46.798141660Z" level=info msg="CreateContainer within sandbox \"dacff4a59b3b8a179bcdc085a0f0baf9ff93593049b005647f0297ea8a8a7be2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:29:46.799871 kubelet[2138]: E0113 21:29:46.799841 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:46.801567 containerd[1454]: time="2025-01-13T21:29:46.801479991Z" level=info msg="CreateContainer within sandbox \"33371beeb5bdd7fb8ff038025fc0e416195a693dbf3b9ebb3e76fc22a0da1cf5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:29:46.826782 containerd[1454]: time="2025-01-13T21:29:46.826744922Z" level=info msg="CreateContainer within sandbox 
\"dacff4a59b3b8a179bcdc085a0f0baf9ff93593049b005647f0297ea8a8a7be2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"28a3ba52f1baa7f7c14dff4d7488b21eabc35f51cefdec2a09cbab9588f60d6d\"" Jan 13 21:29:46.827310 containerd[1454]: time="2025-01-13T21:29:46.827275046Z" level=info msg="StartContainer for \"28a3ba52f1baa7f7c14dff4d7488b21eabc35f51cefdec2a09cbab9588f60d6d\"" Jan 13 21:29:46.830901 containerd[1454]: time="2025-01-13T21:29:46.830869878Z" level=info msg="CreateContainer within sandbox \"33371beeb5bdd7fb8ff038025fc0e416195a693dbf3b9ebb3e76fc22a0da1cf5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f277e8a1a60030447473d46aef7f78eddb2c618a334bc36e92d6d2f19fcccd36\"" Jan 13 21:29:46.831401 containerd[1454]: time="2025-01-13T21:29:46.831368132Z" level=info msg="StartContainer for \"f277e8a1a60030447473d46aef7f78eddb2c618a334bc36e92d6d2f19fcccd36\"" Jan 13 21:29:46.832415 containerd[1454]: time="2025-01-13T21:29:46.832384368Z" level=info msg="CreateContainer within sandbox \"23caa645424047ce7e0bd97b37310f79f61b22f75021b7092300119d95b95854\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d1fbdec2c098d317d22f05436a0a9f88599b1379cfb72d21cb207aa69c319864\"" Jan 13 21:29:46.832694 containerd[1454]: time="2025-01-13T21:29:46.832665635Z" level=info msg="StartContainer for \"d1fbdec2c098d317d22f05436a0a9f88599b1379cfb72d21cb207aa69c319864\"" Jan 13 21:29:46.854455 systemd[1]: Started cri-containerd-28a3ba52f1baa7f7c14dff4d7488b21eabc35f51cefdec2a09cbab9588f60d6d.scope - libcontainer container 28a3ba52f1baa7f7c14dff4d7488b21eabc35f51cefdec2a09cbab9588f60d6d. Jan 13 21:29:46.858903 systemd[1]: Started cri-containerd-d1fbdec2c098d317d22f05436a0a9f88599b1379cfb72d21cb207aa69c319864.scope - libcontainer container d1fbdec2c098d317d22f05436a0a9f88599b1379cfb72d21cb207aa69c319864. Jan 13 21:29:46.860582 systemd[1]: Started cri-containerd-f277e8a1a60030447473d46aef7f78eddb2c618a334bc36e92d6d2f19fcccd36.scope - libcontainer container f277e8a1a60030447473d46aef7f78eddb2c618a334bc36e92d6d2f19fcccd36. 
Jan 13 21:29:46.884547 kubelet[2138]: I0113 21:29:46.884514 2138 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:29:46.886330 kubelet[2138]: E0113 21:29:46.884781 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jan 13 21:29:46.894326 containerd[1454]: time="2025-01-13T21:29:46.894292047Z" level=info msg="StartContainer for \"28a3ba52f1baa7f7c14dff4d7488b21eabc35f51cefdec2a09cbab9588f60d6d\" returns successfully" Jan 13 21:29:46.908570 containerd[1454]: time="2025-01-13T21:29:46.908490173Z" level=info msg="StartContainer for \"d1fbdec2c098d317d22f05436a0a9f88599b1379cfb72d21cb207aa69c319864\" returns successfully" Jan 13 21:29:46.914404 containerd[1454]: time="2025-01-13T21:29:46.913961033Z" level=info msg="StartContainer for \"f277e8a1a60030447473d46aef7f78eddb2c618a334bc36e92d6d2f19fcccd36\" returns successfully" Jan 13 21:29:46.984474 kubelet[2138]: E0113 21:29:46.984371 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:46.987377 kubelet[2138]: E0113 21:29:46.987359 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:46.989364 kubelet[2138]: E0113 21:29:46.989347 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:47.965892 kubelet[2138]: E0113 21:29:47.965840 2138 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:29:47.991293 kubelet[2138]: E0113 21:29:47.991260 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:48.057452 kubelet[2138]: E0113 21:29:48.057413 2138 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 21:29:48.413338 kubelet[2138]: E0113 21:29:48.413304 2138 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 21:29:48.486517 kubelet[2138]: I0113 21:29:48.486480 2138 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:29:48.493011 kubelet[2138]: I0113 21:29:48.492974 2138 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 21:29:48.937351 kubelet[2138]: E0113 21:29:48.936955 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:48.955481 kubelet[2138]: I0113 21:29:48.955427 2138 apiserver.go:52] "Watching apiserver" Jan 13 21:29:48.959717 kubelet[2138]: I0113 21:29:48.959670 2138 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:29:48.991514 kubelet[2138]: E0113 21:29:48.991477 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:49.974491 systemd[1]: Reloading requested from client PID 2417 ('systemctl') (unit session-9.scope)... Jan 13 21:29:49.974506 systemd[1]: Reloading... Jan 13 21:29:50.053232 zram_generator::config[2456]: No configuration found. Jan 13 21:29:50.159283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:29:50.247840 systemd[1]: Reloading finished in 272 ms. Jan 13 21:29:50.293659 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:50.317612 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:29:50.317893 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:50.333447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:50.473047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:50.478549 (kubelet)[2501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:29:50.517495 kubelet[2501]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:50.517495 kubelet[2501]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:29:50.517495 kubelet[2501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:50.519211 kubelet[2501]: I0113 21:29:50.517947 2501 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:29:50.523373 kubelet[2501]: I0113 21:29:50.523331 2501 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:29:50.523373 kubelet[2501]: I0113 21:29:50.523351 2501 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:29:50.523565 kubelet[2501]: I0113 21:29:50.523507 2501 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:29:50.524568 kubelet[2501]: I0113 21:29:50.524548 2501 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:29:50.526117 kubelet[2501]: I0113 21:29:50.526085 2501 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:29:50.530172 kubelet[2501]: E0113 21:29:50.530136 2501 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:29:50.530172 kubelet[2501]: I0113 21:29:50.530162 2501 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jan 13 21:29:50.534694 kubelet[2501]: I0113 21:29:50.534660 2501 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:29:50.534794 kubelet[2501]: I0113 21:29:50.534779 2501 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:29:50.534954 kubelet[2501]: I0113 21:29:50.534920 2501 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:29:50.535098 kubelet[2501]: I0113 21:29:50.534947 2501 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:29:50.535174 kubelet[2501]: I0113 21:29:50.535098 2501 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:29:50.535174 kubelet[2501]: I0113 21:29:50.535108 2501 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:29:50.535174 kubelet[2501]: I0113 21:29:50.535140 2501 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:50.535311 kubelet[2501]: I0113 21:29:50.535264 2501 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:29:50.535311 kubelet[2501]: I0113 21:29:50.535275 2501 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:29:50.535311 kubelet[2501]: I0113 21:29:50.535306 2501 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:29:50.535372 kubelet[2501]: I0113 21:29:50.535321 2501 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:29:50.535754 kubelet[2501]: I0113 21:29:50.535730 2501 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:29:50.536428 kubelet[2501]: I0113 21:29:50.536067 2501 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:29:50.536627 kubelet[2501]: I0113 21:29:50.536458 2501 server.go:1269] "Started kubelet" Jan 13 21:29:50.537385 
kubelet[2501]: I0113 21:29:50.537127 2501 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:29:50.537463 kubelet[2501]: I0113 21:29:50.537443 2501 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:29:50.537514 kubelet[2501]: I0113 21:29:50.537490 2501 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:29:50.537637 kubelet[2501]: I0113 21:29:50.537622 2501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:29:50.538414 kubelet[2501]: I0113 21:29:50.538389 2501 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:29:50.547903 kubelet[2501]: I0113 21:29:50.545704 2501 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:29:50.547903 kubelet[2501]: I0113 21:29:50.546103 2501 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:29:50.547903 kubelet[2501]: I0113 21:29:50.546155 2501 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:29:50.547903 kubelet[2501]: I0113 21:29:50.546256 2501 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:29:50.547903 kubelet[2501]: E0113 21:29:50.546698 2501 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:29:50.550984 kubelet[2501]: I0113 21:29:50.550944 2501 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:29:50.551053 kubelet[2501]: I0113 21:29:50.551018 2501 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:29:50.552501 kubelet[2501]: E0113 21:29:50.551227 2501 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:29:50.552950 kubelet[2501]: I0113 21:29:50.552931 2501 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:29:50.554073 kubelet[2501]: I0113 21:29:50.554033 2501 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:29:50.555433 kubelet[2501]: I0113 21:29:50.555410 2501 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:29:50.555553 kubelet[2501]: I0113 21:29:50.555441 2501 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:29:50.555553 kubelet[2501]: I0113 21:29:50.555461 2501 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:29:50.555553 kubelet[2501]: E0113 21:29:50.555499 2501 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:29:50.584509 kubelet[2501]: I0113 21:29:50.584481 2501 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:29:50.584509 kubelet[2501]: I0113 21:29:50.584496 2501 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:29:50.584509 kubelet[2501]: I0113 21:29:50.584513 2501 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:50.584683 kubelet[2501]: I0113 21:29:50.584636 2501 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:29:50.584683 kubelet[2501]: I0113 21:29:50.584645 2501 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:29:50.584683 kubelet[2501]: I0113 21:29:50.584667 2501 policy_none.go:49] "None policy: Start" Jan 13 21:29:50.585278 kubelet[2501]: I0113 21:29:50.585181 2501 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:29:50.585278 kubelet[2501]: I0113 21:29:50.585214 2501 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:29:50.585398 kubelet[2501]: I0113 21:29:50.585327 2501 state_mem.go:75] "Updated machine memory state" Jan 13 21:29:50.591267 kubelet[2501]: I0113 21:29:50.591161 2501 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:29:50.591405 kubelet[2501]: I0113 21:29:50.591384 2501 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:29:50.591436 kubelet[2501]: I0113 21:29:50.591401 2501 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:29:50.591637 kubelet[2501]: I0113 21:29:50.591612 2501 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:29:50.662816 kubelet[2501]: E0113 21:29:50.662782 2501 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:50.697090 kubelet[2501]: I0113 21:29:50.697046 2501 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:29:50.706546 kubelet[2501]: I0113 21:29:50.706515 2501 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 13 21:29:50.706662 kubelet[2501]: I0113 21:29:50.706590 2501 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 21:29:50.746775 kubelet[2501]: I0113 21:29:50.746754 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:50.746851 kubelet[2501]: I0113 21:29:50.746778 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:50.746851 kubelet[2501]: I0113 21:29:50.746798 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:50.746851 kubelet[2501]: I0113 21:29:50.746813 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:29:50.746851 kubelet[2501]: I0113 21:29:50.746827 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4fbd11920367ef0fb89e5fdadc91831-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4fbd11920367ef0fb89e5fdadc91831\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:29:50.746851 kubelet[2501]: I0113 21:29:50.746840 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4fbd11920367ef0fb89e5fdadc91831-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4fbd11920367ef0fb89e5fdadc91831\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:29:50.746966 kubelet[2501]: I0113 21:29:50.746857 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4fbd11920367ef0fb89e5fdadc91831-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d4fbd11920367ef0fb89e5fdadc91831\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:29:50.746966 kubelet[2501]: I0113 21:29:50.746874 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:50.746966 kubelet[2501]: I0113 21:29:50.746891 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:29:50.962846 kubelet[2501]: E0113 21:29:50.962171 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:50.962846 kubelet[2501]: E0113 21:29:50.962530 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:50.963304 kubelet[2501]: E0113 21:29:50.963257 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:51.537292 kubelet[2501]: I0113 21:29:51.537183 2501 apiserver.go:52] "Watching apiserver" Jan 13 21:29:51.546382 kubelet[2501]: I0113 21:29:51.546331 2501 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:29:51.569654 kubelet[2501]: E0113 21:29:51.569605 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:51.578497 kubelet[2501]: E0113 21:29:51.577773 2501 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 21:29:51.578497 kubelet[2501]: E0113 21:29:51.577886 2501 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:29:51.578497 kubelet[2501]: E0113 21:29:51.577924 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:51.578497 kubelet[2501]: E0113 21:29:51.578059 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:51.592470 kubelet[2501]: I0113 21:29:51.592328 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.59230887 podStartE2EDuration="3.59230887s" podCreationTimestamp="2025-01-13 21:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:29:51.592280776 +0000 UTC m=+1.110105510" watchObservedRunningTime="2025-01-13 21:29:51.59230887 +0000 UTC m=+1.110133604" Jan 13 21:29:51.598492 kubelet[2501]: I0113 21:29:51.598416 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.598397208 podStartE2EDuration="1.598397208s" podCreationTimestamp="2025-01-13 21:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:29:51.59829903 +0000 UTC m=+1.116123764" watchObservedRunningTime="2025-01-13 21:29:51.598397208 +0000 UTC m=+1.116221942" Jan 13 21:29:51.609329 kubelet[2501]: I0113 21:29:51.609267 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.609236279 podStartE2EDuration="1.609236279s" podCreationTimestamp="2025-01-13 21:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:29:51.606481646 +0000 UTC m=+1.124306380" watchObservedRunningTime="2025-01-13 21:29:51.609236279 +0000 UTC m=+1.127061003" Jan 13 21:29:52.571018 kubelet[2501]: E0113 21:29:52.570969 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:52.571607 kubelet[2501]: E0113 21:29:52.571546 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:55.023271 kubelet[2501]: E0113 21:29:55.023216 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:55.283213 sudo[1648]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:55.284922 sshd[1645]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:55.288728 systemd[1]: sshd@8-10.0.0.148:22-10.0.0.1:36632.service: Deactivated successfully. Jan 13 21:29:55.290740 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:29:55.290922 systemd[1]: session-9.scope: Consumed 4.220s CPU time, 156.6M memory peak, 0B memory swap peak. Jan 13 21:29:55.291359 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:29:55.292123 systemd-logind[1437]: Removed session 9. Jan 13 21:29:55.411114 kubelet[2501]: I0113 21:29:55.411061 2501 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:29:55.411367 containerd[1454]: time="2025-01-13T21:29:55.411334972Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:29:55.411777 kubelet[2501]: I0113 21:29:55.411468 2501 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:29:56.313108 systemd[1]: Created slice kubepods-besteffort-pod7568afb6_3f0e_492c_b310_12adb6413d68.slice - libcontainer container kubepods-besteffort-pod7568afb6_3f0e_492c_b310_12adb6413d68.slice. Jan 13 21:29:56.379558 kubelet[2501]: I0113 21:29:56.379499 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7568afb6-3f0e-492c-b310-12adb6413d68-xtables-lock\") pod \"kube-proxy-vmlql\" (UID: \"7568afb6-3f0e-492c-b310-12adb6413d68\") " pod="kube-system/kube-proxy-vmlql" Jan 13 21:29:56.379558 kubelet[2501]: I0113 21:29:56.379550 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7568afb6-3f0e-492c-b310-12adb6413d68-lib-modules\") pod \"kube-proxy-vmlql\" (UID: \"7568afb6-3f0e-492c-b310-12adb6413d68\") " pod="kube-system/kube-proxy-vmlql" Jan 13 21:29:56.379558 kubelet[2501]: I0113 21:29:56.379568 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7568afb6-3f0e-492c-b310-12adb6413d68-kube-proxy\") pod \"kube-proxy-vmlql\" (UID: \"7568afb6-3f0e-492c-b310-12adb6413d68\") " pod="kube-system/kube-proxy-vmlql" Jan 13 21:29:56.379993 kubelet[2501]: I0113 21:29:56.379590 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn8q5\" (UniqueName: \"kubernetes.io/projected/7568afb6-3f0e-492c-b310-12adb6413d68-kube-api-access-mn8q5\") pod \"kube-proxy-vmlql\" (UID: \"7568afb6-3f0e-492c-b310-12adb6413d68\") " pod="kube-system/kube-proxy-vmlql" Jan 13 21:29:56.527043 systemd[1]: Created slice kubepods-besteffort-pod424a908f_439a_4aff_9c57_e93e2c4c8c2b.slice - libcontainer container kubepods-besteffort-pod424a908f_439a_4aff_9c57_e93e2c4c8c2b.slice. 
Jan 13 21:29:56.581492 kubelet[2501]: I0113 21:29:56.581395 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/424a908f-439a-4aff-9c57-e93e2c4c8c2b-var-lib-calico\") pod \"tigera-operator-76c4976dd7-gc2lk\" (UID: \"424a908f-439a-4aff-9c57-e93e2c4c8c2b\") " pod="tigera-operator/tigera-operator-76c4976dd7-gc2lk" Jan 13 21:29:56.581492 kubelet[2501]: I0113 21:29:56.581432 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblcg\" (UniqueName: \"kubernetes.io/projected/424a908f-439a-4aff-9c57-e93e2c4c8c2b-kube-api-access-rblcg\") pod \"tigera-operator-76c4976dd7-gc2lk\" (UID: \"424a908f-439a-4aff-9c57-e93e2c4c8c2b\") " pod="tigera-operator/tigera-operator-76c4976dd7-gc2lk" Jan 13 21:29:56.622883 kubelet[2501]: E0113 21:29:56.622836 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:56.623508 containerd[1454]: time="2025-01-13T21:29:56.623433023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmlql,Uid:7568afb6-3f0e-492c-b310-12adb6413d68,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:56.653410 containerd[1454]: time="2025-01-13T21:29:56.653168158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:56.653410 containerd[1454]: time="2025-01-13T21:29:56.653385152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:56.653544 containerd[1454]: time="2025-01-13T21:29:56.653414849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:56.653569 containerd[1454]: time="2025-01-13T21:29:56.653521161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:56.667383 systemd[1]: run-containerd-runc-k8s.io-cbeb21fd2b028936d620d474b153172a2fb479f8750750e85b376e3d6bdf39ae-runc.a3tcCa.mount: Deactivated successfully. Jan 13 21:29:56.681337 systemd[1]: Started cri-containerd-cbeb21fd2b028936d620d474b153172a2fb479f8750750e85b376e3d6bdf39ae.scope - libcontainer container cbeb21fd2b028936d620d474b153172a2fb479f8750750e85b376e3d6bdf39ae. 
Jan 13 21:29:56.703101 containerd[1454]: time="2025-01-13T21:29:56.703042687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmlql,Uid:7568afb6-3f0e-492c-b310-12adb6413d68,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbeb21fd2b028936d620d474b153172a2fb479f8750750e85b376e3d6bdf39ae\"" Jan 13 21:29:56.703911 kubelet[2501]: E0113 21:29:56.703875 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:56.705822 containerd[1454]: time="2025-01-13T21:29:56.705793871Z" level=info msg="CreateContainer within sandbox \"cbeb21fd2b028936d620d474b153172a2fb479f8750750e85b376e3d6bdf39ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:29:56.723470 containerd[1454]: time="2025-01-13T21:29:56.723407890Z" level=info msg="CreateContainer within sandbox \"cbeb21fd2b028936d620d474b153172a2fb479f8750750e85b376e3d6bdf39ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7663aeec6ae7a1139f77241836808de51845008e5952da75e0d7a27cd05b19a9\"" Jan 13 21:29:56.723914 containerd[1454]: time="2025-01-13T21:29:56.723883626Z" level=info msg="StartContainer for \"7663aeec6ae7a1139f77241836808de51845008e5952da75e0d7a27cd05b19a9\"" Jan 13 21:29:56.757363 systemd[1]: Started cri-containerd-7663aeec6ae7a1139f77241836808de51845008e5952da75e0d7a27cd05b19a9.scope - libcontainer container 7663aeec6ae7a1139f77241836808de51845008e5952da75e0d7a27cd05b19a9. Jan 13 21:29:56.786945 containerd[1454]: time="2025-01-13T21:29:56.786888515Z" level=info msg="StartContainer for \"7663aeec6ae7a1139f77241836808de51845008e5952da75e0d7a27cd05b19a9\" returns successfully" Jan 13 21:29:56.830763 containerd[1454]: time="2025-01-13T21:29:56.830707408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-gc2lk,Uid:424a908f-439a-4aff-9c57-e93e2c4c8c2b,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:29:56.880837 containerd[1454]: time="2025-01-13T21:29:56.880589932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:56.880837 containerd[1454]: time="2025-01-13T21:29:56.880716443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:56.882548 containerd[1454]: time="2025-01-13T21:29:56.881592974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:56.882548 containerd[1454]: time="2025-01-13T21:29:56.882004278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:56.901359 systemd[1]: Started cri-containerd-38b4915ae2dd06542dd34ed3d46c148066bae47d79797f88568c1e397d9963f0.scope - libcontainer container 38b4915ae2dd06542dd34ed3d46c148066bae47d79797f88568c1e397d9963f0. 
Jan 13 21:29:56.939288 containerd[1454]: time="2025-01-13T21:29:56.939249708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-gc2lk,Uid:424a908f-439a-4aff-9c57-e93e2c4c8c2b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"38b4915ae2dd06542dd34ed3d46c148066bae47d79797f88568c1e397d9963f0\"" Jan 13 21:29:56.940766 containerd[1454]: time="2025-01-13T21:29:56.940749206Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:29:57.226470 kubelet[2501]: E0113 21:29:57.226356 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:57.579174 kubelet[2501]: E0113 21:29:57.579149 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:57.579538 kubelet[2501]: E0113 21:29:57.579248 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:57.593164 kubelet[2501]: I0113 21:29:57.593102 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vmlql" podStartSLOduration=1.593046019 podStartE2EDuration="1.593046019s" podCreationTimestamp="2025-01-13 21:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:29:57.593043675 +0000 UTC m=+7.110868409" watchObservedRunningTime="2025-01-13 21:29:57.593046019 +0000 UTC m=+7.110870753" Jan 13 21:29:58.040475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393728585.mount: Deactivated successfully. 
Jan 13 21:29:59.469899 containerd[1454]: time="2025-01-13T21:29:59.469833034Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:59.470631 containerd[1454]: time="2025-01-13T21:29:59.470578942Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764273" Jan 13 21:29:59.471929 containerd[1454]: time="2025-01-13T21:29:59.471884574Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:59.474009 containerd[1454]: time="2025-01-13T21:29:59.473978804Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:59.474689 containerd[1454]: time="2025-01-13T21:29:59.474655802Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.533817325s" Jan 13 21:29:59.474722 containerd[1454]: time="2025-01-13T21:29:59.474687501Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 21:29:59.476849 containerd[1454]: time="2025-01-13T21:29:59.476808544Z" level=info msg="CreateContainer within sandbox \"38b4915ae2dd06542dd34ed3d46c148066bae47d79797f88568c1e397d9963f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:29:59.488404 containerd[1454]: time="2025-01-13T21:29:59.488354601Z" level=info msg="CreateContainer within sandbox \"38b4915ae2dd06542dd34ed3d46c148066bae47d79797f88568c1e397d9963f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3acd7ff5251891f0e8e77fe66b4ad28f1d8d1d5ad0f1d83a104fbc0a29d88a9c\"" Jan 13 21:29:59.488701 containerd[1454]: time="2025-01-13T21:29:59.488680440Z" level=info msg="StartContainer for \"3acd7ff5251891f0e8e77fe66b4ad28f1d8d1d5ad0f1d83a104fbc0a29d88a9c\"" Jan 13 21:29:59.518378 systemd[1]: Started cri-containerd-3acd7ff5251891f0e8e77fe66b4ad28f1d8d1d5ad0f1d83a104fbc0a29d88a9c.scope - libcontainer container 3acd7ff5251891f0e8e77fe66b4ad28f1d8d1d5ad0f1d83a104fbc0a29d88a9c. 
Jan 13 21:29:59.600148 containerd[1454]: time="2025-01-13T21:29:59.600104014Z" level=info msg="StartContainer for \"3acd7ff5251891f0e8e77fe66b4ad28f1d8d1d5ad0f1d83a104fbc0a29d88a9c\" returns successfully" Jan 13 21:30:00.610106 kubelet[2501]: I0113 21:30:00.609619 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-gc2lk" podStartSLOduration=2.074589451 podStartE2EDuration="4.609605277s" podCreationTimestamp="2025-01-13 21:29:56 +0000 UTC" firstStartedPulling="2025-01-13 21:29:56.940407064 +0000 UTC m=+6.458231798" lastFinishedPulling="2025-01-13 21:29:59.475422889 +0000 UTC m=+8.993247624" observedRunningTime="2025-01-13 21:30:00.609553137 +0000 UTC m=+10.127377871" watchObservedRunningTime="2025-01-13 21:30:00.609605277 +0000 UTC m=+10.127430011" Jan 13 21:30:01.460724 kubelet[2501]: E0113 21:30:01.460658 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:01.815836 update_engine[1442]: I20250113 21:30:01.815766 1442 update_attempter.cc:509] Updating boot flags... Jan 13 21:30:01.848236 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2891) Jan 13 21:30:01.883469 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2894) Jan 13 21:30:01.929227 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2894) Jan 13 21:30:02.626759 systemd[1]: Created slice kubepods-besteffort-pod22e9f4a0_75ad_4eda_b7ff_dd94434987fa.slice - libcontainer container kubepods-besteffort-pod22e9f4a0_75ad_4eda_b7ff_dd94434987fa.slice. Jan 13 21:30:02.636075 kubelet[2501]: I0113 21:30:02.636023 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a67bf09b-881b-40a5-b7c0-6afe944f2328-cni-bin-dir\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.636768 kubelet[2501]: I0113 21:30:02.636109 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a67bf09b-881b-40a5-b7c0-6afe944f2328-xtables-lock\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.636768 kubelet[2501]: I0113 21:30:02.636128 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a67bf09b-881b-40a5-b7c0-6afe944f2328-var-lib-calico\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.636768 kubelet[2501]: I0113 21:30:02.636251 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22e9f4a0-75ad-4eda-b7ff-dd94434987fa-tigera-ca-bundle\") pod \"calico-typha-85d9ff5964-pqn7g\" (UID: \"22e9f4a0-75ad-4eda-b7ff-dd94434987fa\") " pod="calico-system/calico-typha-85d9ff5964-pqn7g" Jan 13 21:30:02.636768 kubelet[2501]: I0113 21:30:02.636269 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/22e9f4a0-75ad-4eda-b7ff-dd94434987fa-typha-certs\") pod \"calico-typha-85d9ff5964-pqn7g\" (UID: \"22e9f4a0-75ad-4eda-b7ff-dd94434987fa\") " pod="calico-system/calico-typha-85d9ff5964-pqn7g" Jan 13 21:30:02.636768 kubelet[2501]: I0113 21:30:02.636285 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a67bf09b-881b-40a5-b7c0-6afe944f2328-node-certs\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.636892 kubelet[2501]: I0113 21:30:02.636330 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a67bf09b-881b-40a5-b7c0-6afe944f2328-cni-log-dir\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.636892 kubelet[2501]: I0113 21:30:02.636400 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm2tt\" (UniqueName: \"kubernetes.io/projected/22e9f4a0-75ad-4eda-b7ff-dd94434987fa-kube-api-access-mm2tt\") pod \"calico-typha-85d9ff5964-pqn7g\" (UID: \"22e9f4a0-75ad-4eda-b7ff-dd94434987fa\") " pod="calico-system/calico-typha-85d9ff5964-pqn7g" Jan 13 21:30:02.636892 kubelet[2501]: I0113 21:30:02.636416 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a67bf09b-881b-40a5-b7c0-6afe944f2328-policysync\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.636892 kubelet[2501]: I0113 21:30:02.636429 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a67bf09b-881b-40a5-b7c0-6afe944f2328-lib-modules\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.636892 kubelet[2501]: I0113 21:30:02.636479 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a67bf09b-881b-40a5-b7c0-6afe944f2328-cni-net-dir\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.637009 kubelet[2501]: I0113 21:30:02.636497 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a67bf09b-881b-40a5-b7c0-6afe944f2328-tigera-ca-bundle\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.637009 kubelet[2501]: I0113 21:30:02.636510 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a67bf09b-881b-40a5-b7c0-6afe944f2328-var-run-calico\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.637009 kubelet[2501]: I0113 21:30:02.636553 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/a67bf09b-881b-40a5-b7c0-6afe944f2328-flexvol-driver-host\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.637009 kubelet[2501]: I0113 21:30:02.636577 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96x48\" (UniqueName: \"kubernetes.io/projected/a67bf09b-881b-40a5-b7c0-6afe944f2328-kube-api-access-96x48\") pod \"calico-node-bfqqr\" (UID: \"a67bf09b-881b-40a5-b7c0-6afe944f2328\") " pod="calico-system/calico-node-bfqqr" Jan 13 21:30:02.637128 systemd[1]: Created slice kubepods-besteffort-poda67bf09b_881b_40a5_b7c0_6afe944f2328.slice - libcontainer container kubepods-besteffort-poda67bf09b_881b_40a5_b7c0_6afe944f2328.slice. Jan 13 21:30:02.723337 kubelet[2501]: E0113 21:30:02.722666 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8rhh" podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:02.750327 kubelet[2501]: E0113 21:30:02.750272 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.750327 kubelet[2501]: W0113 21:30:02.750304 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.750327 kubelet[2501]: E0113 21:30:02.750337 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.753734 kubelet[2501]: E0113 21:30:02.752828 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.753734 kubelet[2501]: W0113 21:30:02.752844 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.753734 kubelet[2501]: E0113 21:30:02.752860 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.753734 kubelet[2501]: E0113 21:30:02.753051 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.753734 kubelet[2501]: W0113 21:30:02.753058 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.753734 kubelet[2501]: E0113 21:30:02.753088 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.756718 kubelet[2501]: E0113 21:30:02.755404 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.756718 kubelet[2501]: W0113 21:30:02.755425 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.756718 kubelet[2501]: E0113 21:30:02.755446 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.821378 kubelet[2501]: E0113 21:30:02.821338 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.821378 kubelet[2501]: W0113 21:30:02.821360 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.821378 kubelet[2501]: E0113 21:30:02.821378 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.821666 kubelet[2501]: E0113 21:30:02.821641 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.821695 kubelet[2501]: W0113 21:30:02.821663 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.821695 kubelet[2501]: E0113 21:30:02.821688 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.821960 kubelet[2501]: E0113 21:30:02.821935 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.821960 kubelet[2501]: W0113 21:30:02.821946 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.821960 kubelet[2501]: E0113 21:30:02.821954 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.822181 kubelet[2501]: E0113 21:30:02.822167 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.822181 kubelet[2501]: W0113 21:30:02.822177 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.822265 kubelet[2501]: E0113 21:30:02.822186 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.822422 kubelet[2501]: E0113 21:30:02.822407 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.822422 kubelet[2501]: W0113 21:30:02.822418 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.822474 kubelet[2501]: E0113 21:30:02.822426 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.822649 kubelet[2501]: E0113 21:30:02.822635 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.822649 kubelet[2501]: W0113 21:30:02.822646 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.822690 kubelet[2501]: E0113 21:30:02.822653 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.822836 kubelet[2501]: E0113 21:30:02.822822 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.822836 kubelet[2501]: W0113 21:30:02.822832 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.822880 kubelet[2501]: E0113 21:30:02.822840 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.823013 kubelet[2501]: E0113 21:30:02.822999 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.823035 kubelet[2501]: W0113 21:30:02.823009 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.823035 kubelet[2501]: E0113 21:30:02.823025 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.823211 kubelet[2501]: E0113 21:30:02.823181 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.823211 kubelet[2501]: W0113 21:30:02.823191 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.823263 kubelet[2501]: E0113 21:30:02.823212 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.823398 kubelet[2501]: E0113 21:30:02.823383 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.823398 kubelet[2501]: W0113 21:30:02.823394 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.823460 kubelet[2501]: E0113 21:30:02.823402 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.823564 kubelet[2501]: E0113 21:30:02.823551 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.823564 kubelet[2501]: W0113 21:30:02.823560 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.823612 kubelet[2501]: E0113 21:30:02.823567 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.823725 kubelet[2501]: E0113 21:30:02.823712 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.823725 kubelet[2501]: W0113 21:30:02.823721 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.823778 kubelet[2501]: E0113 21:30:02.823728 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.823886 kubelet[2501]: E0113 21:30:02.823872 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.823886 kubelet[2501]: W0113 21:30:02.823881 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.823927 kubelet[2501]: E0113 21:30:02.823888 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.824076 kubelet[2501]: E0113 21:30:02.824062 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.824076 kubelet[2501]: W0113 21:30:02.824072 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.824139 kubelet[2501]: E0113 21:30:02.824080 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.824343 kubelet[2501]: E0113 21:30:02.824309 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.824343 kubelet[2501]: W0113 21:30:02.824334 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.824505 kubelet[2501]: E0113 21:30:02.824361 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.824612 kubelet[2501]: E0113 21:30:02.824599 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.824612 kubelet[2501]: W0113 21:30:02.824609 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.824658 kubelet[2501]: E0113 21:30:02.824619 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.824821 kubelet[2501]: E0113 21:30:02.824808 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.824821 kubelet[2501]: W0113 21:30:02.824818 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.824871 kubelet[2501]: E0113 21:30:02.824826 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.825017 kubelet[2501]: E0113 21:30:02.825004 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.825017 kubelet[2501]: W0113 21:30:02.825013 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.825069 kubelet[2501]: E0113 21:30:02.825021 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.825219 kubelet[2501]: E0113 21:30:02.825190 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.825219 kubelet[2501]: W0113 21:30:02.825215 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.825271 kubelet[2501]: E0113 21:30:02.825223 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.825417 kubelet[2501]: E0113 21:30:02.825399 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.825417 kubelet[2501]: W0113 21:30:02.825409 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.825417 kubelet[2501]: E0113 21:30:02.825416 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.838732 kubelet[2501]: E0113 21:30:02.838708 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.838732 kubelet[2501]: W0113 21:30:02.838722 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.838732 kubelet[2501]: E0113 21:30:02.838732 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.838818 kubelet[2501]: I0113 21:30:02.838755 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e11df133-a251-4390-b19c-decc83ce2384-registration-dir\") pod \"csi-node-driver-s8rhh\" (UID: \"e11df133-a251-4390-b19c-decc83ce2384\") " pod="calico-system/csi-node-driver-s8rhh" Jan 13 21:30:02.838969 kubelet[2501]: E0113 21:30:02.838947 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.838969 kubelet[2501]: W0113 21:30:02.838961 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.839025 kubelet[2501]: E0113 21:30:02.838973 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.839025 kubelet[2501]: I0113 21:30:02.838987 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e11df133-a251-4390-b19c-decc83ce2384-kubelet-dir\") pod \"csi-node-driver-s8rhh\" (UID: \"e11df133-a251-4390-b19c-decc83ce2384\") " pod="calico-system/csi-node-driver-s8rhh" Jan 13 21:30:02.839262 kubelet[2501]: E0113 21:30:02.839237 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.839262 kubelet[2501]: W0113 21:30:02.839253 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.839334 kubelet[2501]: E0113 21:30:02.839268 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.839467 kubelet[2501]: E0113 21:30:02.839452 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.839467 kubelet[2501]: W0113 21:30:02.839463 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.839515 kubelet[2501]: E0113 21:30:02.839476 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.839709 kubelet[2501]: E0113 21:30:02.839690 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.839709 kubelet[2501]: W0113 21:30:02.839707 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.839759 kubelet[2501]: E0113 21:30:02.839724 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.839759 kubelet[2501]: I0113 21:30:02.839755 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e11df133-a251-4390-b19c-decc83ce2384-socket-dir\") pod \"csi-node-driver-s8rhh\" (UID: \"e11df133-a251-4390-b19c-decc83ce2384\") " pod="calico-system/csi-node-driver-s8rhh" Jan 13 21:30:02.839960 kubelet[2501]: E0113 21:30:02.839945 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.839960 kubelet[2501]: W0113 21:30:02.839956 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.840012 kubelet[2501]: E0113 21:30:02.839969 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.840012 kubelet[2501]: I0113 21:30:02.839983 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e11df133-a251-4390-b19c-decc83ce2384-varrun\") pod \"csi-node-driver-s8rhh\" (UID: \"e11df133-a251-4390-b19c-decc83ce2384\") " pod="calico-system/csi-node-driver-s8rhh" Jan 13 21:30:02.840224 kubelet[2501]: E0113 21:30:02.840186 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.840224 kubelet[2501]: W0113 21:30:02.840215 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.840276 kubelet[2501]: E0113 21:30:02.840242 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.840276 kubelet[2501]: I0113 21:30:02.840267 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzmn\" (UniqueName: \"kubernetes.io/projected/e11df133-a251-4390-b19c-decc83ce2384-kube-api-access-krzmn\") pod \"csi-node-driver-s8rhh\" (UID: \"e11df133-a251-4390-b19c-decc83ce2384\") " pod="calico-system/csi-node-driver-s8rhh" Jan 13 21:30:02.840444 kubelet[2501]: E0113 21:30:02.840429 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.840444 kubelet[2501]: W0113 21:30:02.840440 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.840484 kubelet[2501]: E0113 21:30:02.840472 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.840649 kubelet[2501]: E0113 21:30:02.840636 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.840649 kubelet[2501]: W0113 21:30:02.840646 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.840702 kubelet[2501]: E0113 21:30:02.840660 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.840852 kubelet[2501]: E0113 21:30:02.840839 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.840852 kubelet[2501]: W0113 21:30:02.840849 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.840906 kubelet[2501]: E0113 21:30:02.840861 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.841072 kubelet[2501]: E0113 21:30:02.841057 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.841072 kubelet[2501]: W0113 21:30:02.841069 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.841126 kubelet[2501]: E0113 21:30:02.841082 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.841266 kubelet[2501]: E0113 21:30:02.841251 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.841266 kubelet[2501]: W0113 21:30:02.841261 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.841339 kubelet[2501]: E0113 21:30:02.841269 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.841471 kubelet[2501]: E0113 21:30:02.841457 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.841471 kubelet[2501]: W0113 21:30:02.841467 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.841516 kubelet[2501]: E0113 21:30:02.841475 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.841667 kubelet[2501]: E0113 21:30:02.841654 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.841667 kubelet[2501]: W0113 21:30:02.841664 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.841720 kubelet[2501]: E0113 21:30:02.841671 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.841854 kubelet[2501]: E0113 21:30:02.841840 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.841854 kubelet[2501]: W0113 21:30:02.841850 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.841903 kubelet[2501]: E0113 21:30:02.841857 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.934275 kubelet[2501]: E0113 21:30:02.933624 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:02.934497 containerd[1454]: time="2025-01-13T21:30:02.934151010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85d9ff5964-pqn7g,Uid:22e9f4a0-75ad-4eda-b7ff-dd94434987fa,Namespace:calico-system,Attempt:0,}" Jan 13 21:30:02.940337 kubelet[2501]: E0113 21:30:02.940308 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:02.940755 containerd[1454]: time="2025-01-13T21:30:02.940713845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bfqqr,Uid:a67bf09b-881b-40a5-b7c0-6afe944f2328,Namespace:calico-system,Attempt:0,}" Jan 13 21:30:02.940807 kubelet[2501]: E0113 21:30:02.940756 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.940807 kubelet[2501]: W0113 21:30:02.940768 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.940807 kubelet[2501]: E0113 21:30:02.940781 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.941069 kubelet[2501]: E0113 21:30:02.941042 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.941069 kubelet[2501]: W0113 21:30:02.941065 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.941277 kubelet[2501]: E0113 21:30:02.941101 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.941395 kubelet[2501]: E0113 21:30:02.941383 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.941421 kubelet[2501]: W0113 21:30:02.941396 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.941421 kubelet[2501]: E0113 21:30:02.941411 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.941614 kubelet[2501]: E0113 21:30:02.941603 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.941614 kubelet[2501]: W0113 21:30:02.941612 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.941664 kubelet[2501]: E0113 21:30:02.941623 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.941817 kubelet[2501]: E0113 21:30:02.941798 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.941817 kubelet[2501]: W0113 21:30:02.941811 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.941902 kubelet[2501]: E0113 21:30:02.941827 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.942113 kubelet[2501]: E0113 21:30:02.942081 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.942113 kubelet[2501]: W0113 21:30:02.942105 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.942158 kubelet[2501]: E0113 21:30:02.942134 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.942408 kubelet[2501]: E0113 21:30:02.942388 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.942408 kubelet[2501]: W0113 21:30:02.942401 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.942468 kubelet[2501]: E0113 21:30:02.942434 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.942610 kubelet[2501]: E0113 21:30:02.942596 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.942610 kubelet[2501]: W0113 21:30:02.942606 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.942659 kubelet[2501]: E0113 21:30:02.942633 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.942833 kubelet[2501]: E0113 21:30:02.942812 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.942833 kubelet[2501]: W0113 21:30:02.942824 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.942886 kubelet[2501]: E0113 21:30:02.942837 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.943010 kubelet[2501]: E0113 21:30:02.942996 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.943010 kubelet[2501]: W0113 21:30:02.943007 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.943057 kubelet[2501]: E0113 21:30:02.943022 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.943243 kubelet[2501]: E0113 21:30:02.943223 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.943243 kubelet[2501]: W0113 21:30:02.943234 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.943243 kubelet[2501]: E0113 21:30:02.943246 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.943491 kubelet[2501]: E0113 21:30:02.943468 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.943491 kubelet[2501]: W0113 21:30:02.943485 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.943618 kubelet[2501]: E0113 21:30:02.943509 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.943758 kubelet[2501]: E0113 21:30:02.943738 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.943758 kubelet[2501]: W0113 21:30:02.943751 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.943824 kubelet[2501]: E0113 21:30:02.943769 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.944029 kubelet[2501]: E0113 21:30:02.944008 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.944029 kubelet[2501]: W0113 21:30:02.944019 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.944094 kubelet[2501]: E0113 21:30:02.944047 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.944253 kubelet[2501]: E0113 21:30:02.944240 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.944253 kubelet[2501]: W0113 21:30:02.944250 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.944316 kubelet[2501]: E0113 21:30:02.944276 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.944465 kubelet[2501]: E0113 21:30:02.944451 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.944465 kubelet[2501]: W0113 21:30:02.944461 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.944513 kubelet[2501]: E0113 21:30:02.944486 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.944670 kubelet[2501]: E0113 21:30:02.944657 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.944670 kubelet[2501]: W0113 21:30:02.944667 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.944714 kubelet[2501]: E0113 21:30:02.944680 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.944893 kubelet[2501]: E0113 21:30:02.944878 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.944893 kubelet[2501]: W0113 21:30:02.944890 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.944935 kubelet[2501]: E0113 21:30:02.944904 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.945089 kubelet[2501]: E0113 21:30:02.945075 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.945089 kubelet[2501]: W0113 21:30:02.945085 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.945130 kubelet[2501]: E0113 21:30:02.945096 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.945316 kubelet[2501]: E0113 21:30:02.945300 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.945316 kubelet[2501]: W0113 21:30:02.945313 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.945371 kubelet[2501]: E0113 21:30:02.945327 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.945526 kubelet[2501]: E0113 21:30:02.945512 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.945526 kubelet[2501]: W0113 21:30:02.945523 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.945572 kubelet[2501]: E0113 21:30:02.945534 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.945790 kubelet[2501]: E0113 21:30:02.945774 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.945790 kubelet[2501]: W0113 21:30:02.945786 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.945840 kubelet[2501]: E0113 21:30:02.945798 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.946019 kubelet[2501]: E0113 21:30:02.946005 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.946019 kubelet[2501]: W0113 21:30:02.946015 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.946069 kubelet[2501]: E0113 21:30:02.946028 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:02.946266 kubelet[2501]: E0113 21:30:02.946251 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.946266 kubelet[2501]: W0113 21:30:02.946263 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.946326 kubelet[2501]: E0113 21:30:02.946278 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:02.946495 kubelet[2501]: E0113 21:30:02.946480 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:02.946495 kubelet[2501]: W0113 21:30:02.946491 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:02.946539 kubelet[2501]: E0113 21:30:02.946500 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:03.044530 kubelet[2501]: E0113 21:30:03.044495 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:03.044530 kubelet[2501]: W0113 21:30:03.044508 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:03.044530 kubelet[2501]: E0113 21:30:03.044518 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:03.060358 kubelet[2501]: E0113 21:30:03.060319 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:03.060358 kubelet[2501]: W0113 21:30:03.060339 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:03.060358 kubelet[2501]: E0113 21:30:03.060357 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:03.093581 containerd[1454]: time="2025-01-13T21:30:03.092628775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:03.093581 containerd[1454]: time="2025-01-13T21:30:03.092683469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:03.093581 containerd[1454]: time="2025-01-13T21:30:03.092726641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:03.093581 containerd[1454]: time="2025-01-13T21:30:03.092831159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:03.097863 containerd[1454]: time="2025-01-13T21:30:03.097633360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:03.097863 containerd[1454]: time="2025-01-13T21:30:03.097702440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:03.097863 containerd[1454]: time="2025-01-13T21:30:03.097720796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:03.097994 containerd[1454]: time="2025-01-13T21:30:03.097879296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:03.121427 systemd[1]: Started cri-containerd-ad94401fdea24be6b87c663f169b7a33ed318d832137ad971a7956a2df3dda20.scope - libcontainer container ad94401fdea24be6b87c663f169b7a33ed318d832137ad971a7956a2df3dda20. Jan 13 21:30:03.126211 systemd[1]: Started cri-containerd-295e58ca0c46180f07b7fe6983c5f128911559a44309e255956fa2f1a497830c.scope - libcontainer container 295e58ca0c46180f07b7fe6983c5f128911559a44309e255956fa2f1a497830c. Jan 13 21:30:03.149189 containerd[1454]: time="2025-01-13T21:30:03.149085324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bfqqr,Uid:a67bf09b-881b-40a5-b7c0-6afe944f2328,Namespace:calico-system,Attempt:0,} returns sandbox id \"295e58ca0c46180f07b7fe6983c5f128911559a44309e255956fa2f1a497830c\"" Jan 13 21:30:03.150157 kubelet[2501]: E0113 21:30:03.150134 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:03.151947 containerd[1454]: time="2025-01-13T21:30:03.151824506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:30:03.162111 containerd[1454]: time="2025-01-13T21:30:03.162083798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85d9ff5964-pqn7g,Uid:22e9f4a0-75ad-4eda-b7ff-dd94434987fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad94401fdea24be6b87c663f169b7a33ed318d832137ad971a7956a2df3dda20\"" Jan 13 21:30:03.162738 kubelet[2501]: E0113 21:30:03.162699 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:04.556389 kubelet[2501]: E0113 21:30:04.556346 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8rhh" podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:05.026692 kubelet[2501]: E0113 21:30:05.026665 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:05.041248 kubelet[2501]: E0113 21:30:05.041219 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.041248 kubelet[2501]: W0113 21:30:05.041239 2501 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.041248 kubelet[2501]: E0113 21:30:05.041256 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.041538 kubelet[2501]: E0113 21:30:05.041519 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.041538 kubelet[2501]: W0113 21:30:05.041533 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.041801 kubelet[2501]: E0113 21:30:05.041542 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.041801 kubelet[2501]: E0113 21:30:05.041760 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.041801 kubelet[2501]: W0113 21:30:05.041768 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.041801 kubelet[2501]: E0113 21:30:05.041777 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.042284 kubelet[2501]: E0113 21:30:05.041995 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.042284 kubelet[2501]: W0113 21:30:05.042006 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.042284 kubelet[2501]: E0113 21:30:05.042014 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.042584 kubelet[2501]: E0113 21:30:05.042563 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.042584 kubelet[2501]: W0113 21:30:05.042580 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.042661 kubelet[2501]: E0113 21:30:05.042592 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:05.042843 kubelet[2501]: E0113 21:30:05.042823 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.042843 kubelet[2501]: W0113 21:30:05.042837 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.042925 kubelet[2501]: E0113 21:30:05.042848 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.043119 kubelet[2501]: E0113 21:30:05.043104 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.043119 kubelet[2501]: W0113 21:30:05.043117 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.043213 kubelet[2501]: E0113 21:30:05.043132 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.043441 kubelet[2501]: E0113 21:30:05.043424 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.043441 kubelet[2501]: W0113 21:30:05.043439 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.043503 kubelet[2501]: E0113 21:30:05.043450 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.043815 kubelet[2501]: E0113 21:30:05.043796 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.043815 kubelet[2501]: W0113 21:30:05.043809 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.043914 kubelet[2501]: E0113 21:30:05.043831 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.044228 kubelet[2501]: E0113 21:30:05.044080 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.044228 kubelet[2501]: W0113 21:30:05.044094 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.044228 kubelet[2501]: E0113 21:30:05.044106 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:05.044363 kubelet[2501]: E0113 21:30:05.044347 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.044363 kubelet[2501]: W0113 21:30:05.044360 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.044438 kubelet[2501]: E0113 21:30:05.044371 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.044595 kubelet[2501]: E0113 21:30:05.044562 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.044595 kubelet[2501]: W0113 21:30:05.044583 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.044595 kubelet[2501]: E0113 21:30:05.044595 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.044815 kubelet[2501]: E0113 21:30:05.044791 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.044815 kubelet[2501]: W0113 21:30:05.044805 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.044815 kubelet[2501]: E0113 21:30:05.044816 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.045009 kubelet[2501]: E0113 21:30:05.044994 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.045009 kubelet[2501]: W0113 21:30:05.045006 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.045056 kubelet[2501]: E0113 21:30:05.045016 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.045215 kubelet[2501]: E0113 21:30:05.045183 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.045215 kubelet[2501]: W0113 21:30:05.045213 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.045283 kubelet[2501]: E0113 21:30:05.045224 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:05.045418 kubelet[2501]: E0113 21:30:05.045403 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.045418 kubelet[2501]: W0113 21:30:05.045415 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.045474 kubelet[2501]: E0113 21:30:05.045426 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.045648 kubelet[2501]: E0113 21:30:05.045633 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.045648 kubelet[2501]: W0113 21:30:05.045646 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.045702 kubelet[2501]: E0113 21:30:05.045655 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.045835 kubelet[2501]: E0113 21:30:05.045819 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.045835 kubelet[2501]: W0113 21:30:05.045832 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.045887 kubelet[2501]: E0113 21:30:05.045842 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.046053 kubelet[2501]: E0113 21:30:05.046037 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.046053 kubelet[2501]: W0113 21:30:05.046050 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.046120 kubelet[2501]: E0113 21:30:05.046060 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.046288 kubelet[2501]: E0113 21:30:05.046266 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.046288 kubelet[2501]: W0113 21:30:05.046286 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.046343 kubelet[2501]: E0113 21:30:05.046297 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:05.046476 kubelet[2501]: E0113 21:30:05.046463 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.046476 kubelet[2501]: W0113 21:30:05.046472 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.046533 kubelet[2501]: E0113 21:30:05.046480 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.046638 kubelet[2501]: E0113 21:30:05.046625 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.046638 kubelet[2501]: W0113 21:30:05.046634 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.046686 kubelet[2501]: E0113 21:30:05.046642 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.046793 kubelet[2501]: E0113 21:30:05.046779 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.046793 kubelet[2501]: W0113 21:30:05.046789 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.046853 kubelet[2501]: E0113 21:30:05.046797 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.046948 kubelet[2501]: E0113 21:30:05.046935 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.046948 kubelet[2501]: W0113 21:30:05.046944 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.046997 kubelet[2501]: E0113 21:30:05.046952 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:05.047113 kubelet[2501]: E0113 21:30:05.047099 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:05.047113 kubelet[2501]: W0113 21:30:05.047109 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:05.047168 kubelet[2501]: E0113 21:30:05.047116 2501 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:05.702135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095612894.mount: Deactivated successfully. Jan 13 21:30:05.767796 containerd[1454]: time="2025-01-13T21:30:05.767720008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:05.768667 containerd[1454]: time="2025-01-13T21:30:05.768596487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 21:30:05.769960 containerd[1454]: time="2025-01-13T21:30:05.769918339Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:05.772286 containerd[1454]: time="2025-01-13T21:30:05.772235183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:05.772842 containerd[1454]: time="2025-01-13T21:30:05.772797147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.620494064s" Jan 13 21:30:05.772890 containerd[1454]: time="2025-01-13T21:30:05.772839447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:30:05.773733 containerd[1454]: time="2025-01-13T21:30:05.773694495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:30:05.774796 containerd[1454]: time="2025-01-13T21:30:05.774764771Z" level=info msg="CreateContainer within sandbox \"295e58ca0c46180f07b7fe6983c5f128911559a44309e255956fa2f1a497830c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:30:05.790237 containerd[1454]: time="2025-01-13T21:30:05.790184837Z" level=info msg="CreateContainer within sandbox \"295e58ca0c46180f07b7fe6983c5f128911559a44309e255956fa2f1a497830c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"46e17b27251f17d6e691615e2353cb1ac3e8d77e68bc059739f0676df1ba4cff\"" Jan 13 21:30:05.790671 containerd[1454]: time="2025-01-13T21:30:05.790640920Z" level=info msg="StartContainer for \"46e17b27251f17d6e691615e2353cb1ac3e8d77e68bc059739f0676df1ba4cff\"" Jan 13 21:30:05.829333 systemd[1]: Started cri-containerd-46e17b27251f17d6e691615e2353cb1ac3e8d77e68bc059739f0676df1ba4cff.scope - libcontainer container 46e17b27251f17d6e691615e2353cb1ac3e8d77e68bc059739f0676df1ba4cff. Jan 13 21:30:05.859408 containerd[1454]: time="2025-01-13T21:30:05.859356617Z" level=info msg="StartContainer for \"46e17b27251f17d6e691615e2353cb1ac3e8d77e68bc059739f0676df1ba4cff\" returns successfully" Jan 13 21:30:05.872624 systemd[1]: cri-containerd-46e17b27251f17d6e691615e2353cb1ac3e8d77e68bc059739f0676df1ba4cff.scope: Deactivated successfully. 
Jan 13 21:30:05.955312 containerd[1454]: time="2025-01-13T21:30:05.952119459Z" level=info msg="shim disconnected" id=46e17b27251f17d6e691615e2353cb1ac3e8d77e68bc059739f0676df1ba4cff namespace=k8s.io Jan 13 21:30:05.955312 containerd[1454]: time="2025-01-13T21:30:05.955134715Z" level=warning msg="cleaning up after shim disconnected" id=46e17b27251f17d6e691615e2353cb1ac3e8d77e68bc059739f0676df1ba4cff namespace=k8s.io Jan 13 21:30:05.955312 containerd[1454]: time="2025-01-13T21:30:05.955145926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:06.557310 kubelet[2501]: E0113 21:30:06.557175 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8rhh" podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:06.617800 kubelet[2501]: E0113 21:30:06.617763 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:06.683117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46e17b27251f17d6e691615e2353cb1ac3e8d77e68bc059739f0676df1ba4cff-rootfs.mount: Deactivated successfully. Jan 13 21:30:08.557005 kubelet[2501]: E0113 21:30:08.556829 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8rhh" podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:08.984715 containerd[1454]: time="2025-01-13T21:30:08.984596478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:08.985588 containerd[1454]: time="2025-01-13T21:30:08.985548358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 13 21:30:08.986810 containerd[1454]: time="2025-01-13T21:30:08.986770587Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:08.988882 containerd[1454]: time="2025-01-13T21:30:08.988853302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:08.989555 containerd[1454]: time="2025-01-13T21:30:08.989515845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.215790912s" Jan 13 21:30:08.989597 containerd[1454]: time="2025-01-13T21:30:08.989556090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 21:30:08.990470 containerd[1454]: time="2025-01-13T21:30:08.990342227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:30:09.000722 
containerd[1454]: time="2025-01-13T21:30:09.000677696Z" level=info msg="CreateContainer within sandbox \"ad94401fdea24be6b87c663f169b7a33ed318d832137ad971a7956a2df3dda20\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:30:09.016496 containerd[1454]: time="2025-01-13T21:30:09.016440268Z" level=info msg="CreateContainer within sandbox \"ad94401fdea24be6b87c663f169b7a33ed318d832137ad971a7956a2df3dda20\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9ee875b5bec3ee9db96462c22f61b0cc6e83f61259258d20aa95ebddfa7b9da1\"" Jan 13 21:30:09.016844 containerd[1454]: time="2025-01-13T21:30:09.016813723Z" level=info msg="StartContainer for \"9ee875b5bec3ee9db96462c22f61b0cc6e83f61259258d20aa95ebddfa7b9da1\"" Jan 13 21:30:09.048377 systemd[1]: Started cri-containerd-9ee875b5bec3ee9db96462c22f61b0cc6e83f61259258d20aa95ebddfa7b9da1.scope - libcontainer container 9ee875b5bec3ee9db96462c22f61b0cc6e83f61259258d20aa95ebddfa7b9da1. Jan 13 21:30:09.086553 containerd[1454]: time="2025-01-13T21:30:09.086513298Z" level=info msg="StartContainer for \"9ee875b5bec3ee9db96462c22f61b0cc6e83f61259258d20aa95ebddfa7b9da1\" returns successfully" Jan 13 21:30:09.624896 kubelet[2501]: E0113 21:30:09.624864 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:09.634938 kubelet[2501]: I0113 21:30:09.634365 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85d9ff5964-pqn7g" podStartSLOduration=1.807230383 podStartE2EDuration="7.634351958s" podCreationTimestamp="2025-01-13 21:30:02 +0000 UTC" firstStartedPulling="2025-01-13 21:30:03.163098791 +0000 UTC m=+12.680923525" lastFinishedPulling="2025-01-13 21:30:08.990220356 +0000 UTC m=+18.508045100" observedRunningTime="2025-01-13 21:30:09.633991827 +0000 UTC m=+19.151816561" watchObservedRunningTime="2025-01-13 21:30:09.634351958 +0000 UTC m=+19.152176692" Jan 13 21:30:10.556270 kubelet[2501]: E0113 21:30:10.556187 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8rhh" podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:10.626364 kubelet[2501]: I0113 21:30:10.626315 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:10.626856 kubelet[2501]: E0113 21:30:10.626663 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:12.556978 kubelet[2501]: E0113 21:30:12.556121 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8rhh" podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:14.556356 kubelet[2501]: E0113 21:30:14.556300 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8rhh" 
podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:15.591854 containerd[1454]: time="2025-01-13T21:30:15.591797845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:15.592572 containerd[1454]: time="2025-01-13T21:30:15.592535625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:30:15.593674 containerd[1454]: time="2025-01-13T21:30:15.593641659Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:15.595933 containerd[1454]: time="2025-01-13T21:30:15.595897650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:15.596628 containerd[1454]: time="2025-01-13T21:30:15.596592460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.606214887s" Jan 13 21:30:15.596628 containerd[1454]: time="2025-01-13T21:30:15.596620152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:30:15.598257 containerd[1454]: time="2025-01-13T21:30:15.598231409Z" level=info msg="CreateContainer within sandbox \"295e58ca0c46180f07b7fe6983c5f128911559a44309e255956fa2f1a497830c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:30:15.611983 containerd[1454]: time="2025-01-13T21:30:15.611937747Z" level=info msg="CreateContainer within sandbox \"295e58ca0c46180f07b7fe6983c5f128911559a44309e255956fa2f1a497830c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b458e009ce65a79a910619f1a02ba0a7fe3f8ea1fe7b2ca20d5a098a6f648bb6\"" Jan 13 21:30:15.612422 containerd[1454]: time="2025-01-13T21:30:15.612382306Z" level=info msg="StartContainer for \"b458e009ce65a79a910619f1a02ba0a7fe3f8ea1fe7b2ca20d5a098a6f648bb6\"" Jan 13 21:30:15.646360 systemd[1]: Started cri-containerd-b458e009ce65a79a910619f1a02ba0a7fe3f8ea1fe7b2ca20d5a098a6f648bb6.scope - libcontainer container b458e009ce65a79a910619f1a02ba0a7fe3f8ea1fe7b2ca20d5a098a6f648bb6. 
Jan 13 21:30:15.676559 containerd[1454]: time="2025-01-13T21:30:15.676517565Z" level=info msg="StartContainer for \"b458e009ce65a79a910619f1a02ba0a7fe3f8ea1fe7b2ca20d5a098a6f648bb6\" returns successfully" Jan 13 21:30:16.582025 kubelet[2501]: E0113 21:30:16.581970 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8rhh" podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:16.654650 kubelet[2501]: E0113 21:30:16.654603 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:16.747371 systemd[1]: cri-containerd-b458e009ce65a79a910619f1a02ba0a7fe3f8ea1fe7b2ca20d5a098a6f648bb6.scope: Deactivated successfully. Jan 13 21:30:16.766030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b458e009ce65a79a910619f1a02ba0a7fe3f8ea1fe7b2ca20d5a098a6f648bb6-rootfs.mount: Deactivated successfully. Jan 13 21:30:16.786481 kubelet[2501]: I0113 21:30:16.786443 2501 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 21:30:16.869350 systemd[1]: Created slice kubepods-burstable-pode05a8b22_8ec1_444b_8447_61ff0cfe127a.slice - libcontainer container kubepods-burstable-pode05a8b22_8ec1_444b_8447_61ff0cfe127a.slice. Jan 13 21:30:16.873970 systemd[1]: Created slice kubepods-besteffort-podb087af71_417d_412a_8572_efae53d551a9.slice - libcontainer container kubepods-besteffort-podb087af71_417d_412a_8572_efae53d551a9.slice. Jan 13 21:30:16.879987 systemd[1]: Created slice kubepods-burstable-pod6c756d2a_245d_4d57_88a3_fb1081dae774.slice - libcontainer container kubepods-burstable-pod6c756d2a_245d_4d57_88a3_fb1081dae774.slice. Jan 13 21:30:16.884547 systemd[1]: Created slice kubepods-besteffort-poda78c9dd2_1fdd_4b9c_a54f_1f124cc6cdeb.slice - libcontainer container kubepods-besteffort-poda78c9dd2_1fdd_4b9c_a54f_1f124cc6cdeb.slice. Jan 13 21:30:16.889380 systemd[1]: Created slice kubepods-besteffort-podcf845328_8306_43d5_9593_6d711b68c954.slice - libcontainer container kubepods-besteffort-podcf845328_8306_43d5_9593_6d711b68c954.slice. 
Jan 13 21:30:17.023274 containerd[1454]: time="2025-01-13T21:30:17.023177185Z" level=info msg="shim disconnected" id=b458e009ce65a79a910619f1a02ba0a7fe3f8ea1fe7b2ca20d5a098a6f648bb6 namespace=k8s.io Jan 13 21:30:17.023274 containerd[1454]: time="2025-01-13T21:30:17.023263768Z" level=warning msg="cleaning up after shim disconnected" id=b458e009ce65a79a910619f1a02ba0a7fe3f8ea1fe7b2ca20d5a098a6f648bb6 namespace=k8s.io Jan 13 21:30:17.023274 containerd[1454]: time="2025-01-13T21:30:17.023275800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:17.034700 kubelet[2501]: I0113 21:30:17.034657 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b087af71-417d-412a-8572-efae53d551a9-calico-apiserver-certs\") pod \"calico-apiserver-6c77576767-x5sch\" (UID: \"b087af71-417d-412a-8572-efae53d551a9\") " pod="calico-apiserver/calico-apiserver-6c77576767-x5sch" Jan 13 21:30:17.034700 kubelet[2501]: I0113 21:30:17.034699 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6g6z\" (UniqueName: \"kubernetes.io/projected/6c756d2a-245d-4d57-88a3-fb1081dae774-kube-api-access-x6g6z\") pod \"coredns-6f6b679f8f-cx8tp\" (UID: \"6c756d2a-245d-4d57-88a3-fb1081dae774\") " pod="kube-system/coredns-6f6b679f8f-cx8tp" Jan 13 21:30:17.035804 kubelet[2501]: I0113 21:30:17.034718 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf845328-8306-43d5-9593-6d711b68c954-tigera-ca-bundle\") pod \"calico-kube-controllers-bff6bbb7-h6q6q\" (UID: \"cf845328-8306-43d5-9593-6d711b68c954\") " pod="calico-system/calico-kube-controllers-bff6bbb7-h6q6q" Jan 13 21:30:17.035804 kubelet[2501]: I0113 21:30:17.034736 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6td\" (UniqueName: \"kubernetes.io/projected/cf845328-8306-43d5-9593-6d711b68c954-kube-api-access-4w6td\") pod \"calico-kube-controllers-bff6bbb7-h6q6q\" (UID: \"cf845328-8306-43d5-9593-6d711b68c954\") " pod="calico-system/calico-kube-controllers-bff6bbb7-h6q6q" Jan 13 21:30:17.035804 kubelet[2501]: I0113 21:30:17.034764 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e05a8b22-8ec1-444b-8447-61ff0cfe127a-config-volume\") pod \"coredns-6f6b679f8f-ffhfs\" (UID: \"e05a8b22-8ec1-444b-8447-61ff0cfe127a\") " pod="kube-system/coredns-6f6b679f8f-ffhfs" Jan 13 21:30:17.035804 kubelet[2501]: I0113 21:30:17.034802 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvswt\" (UniqueName: \"kubernetes.io/projected/e05a8b22-8ec1-444b-8447-61ff0cfe127a-kube-api-access-gvswt\") pod \"coredns-6f6b679f8f-ffhfs\" (UID: \"e05a8b22-8ec1-444b-8447-61ff0cfe127a\") " pod="kube-system/coredns-6f6b679f8f-ffhfs" Jan 13 21:30:17.035804 kubelet[2501]: I0113 21:30:17.034820 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdj2\" (UniqueName: \"kubernetes.io/projected/a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb-kube-api-access-gmdj2\") pod \"calico-apiserver-6c77576767-dnhjp\" (UID: \"a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb\") " pod="calico-apiserver/calico-apiserver-6c77576767-dnhjp" Jan 13 21:30:17.035945 
kubelet[2501]: I0113 21:30:17.034836 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb-calico-apiserver-certs\") pod \"calico-apiserver-6c77576767-dnhjp\" (UID: \"a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb\") " pod="calico-apiserver/calico-apiserver-6c77576767-dnhjp" Jan 13 21:30:17.035945 kubelet[2501]: I0113 21:30:17.034856 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsbf\" (UniqueName: \"kubernetes.io/projected/b087af71-417d-412a-8572-efae53d551a9-kube-api-access-nmsbf\") pod \"calico-apiserver-6c77576767-x5sch\" (UID: \"b087af71-417d-412a-8572-efae53d551a9\") " pod="calico-apiserver/calico-apiserver-6c77576767-x5sch" Jan 13 21:30:17.035945 kubelet[2501]: I0113 21:30:17.034871 2501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c756d2a-245d-4d57-88a3-fb1081dae774-config-volume\") pod \"coredns-6f6b679f8f-cx8tp\" (UID: \"6c756d2a-245d-4d57-88a3-fb1081dae774\") " pod="kube-system/coredns-6f6b679f8f-cx8tp" Jan 13 21:30:17.172630 kubelet[2501]: E0113 21:30:17.172520 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:17.173237 containerd[1454]: time="2025-01-13T21:30:17.173162689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ffhfs,Uid:e05a8b22-8ec1-444b-8447-61ff0cfe127a,Namespace:kube-system,Attempt:0,}" Jan 13 21:30:17.177186 containerd[1454]: time="2025-01-13T21:30:17.177146442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77576767-x5sch,Uid:b087af71-417d-412a-8572-efae53d551a9,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:30:17.182370 kubelet[2501]: E0113 21:30:17.182329 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:17.182796 containerd[1454]: time="2025-01-13T21:30:17.182758010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cx8tp,Uid:6c756d2a-245d-4d57-88a3-fb1081dae774,Namespace:kube-system,Attempt:0,}" Jan 13 21:30:17.188524 containerd[1454]: time="2025-01-13T21:30:17.188487701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77576767-dnhjp,Uid:a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:30:17.193975 containerd[1454]: time="2025-01-13T21:30:17.193944599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bff6bbb7-h6q6q,Uid:cf845328-8306-43d5-9593-6d711b68c954,Namespace:calico-system,Attempt:0,}" Jan 13 21:30:17.405121 containerd[1454]: time="2025-01-13T21:30:17.404882738Z" level=error msg="Failed to destroy network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.406948 containerd[1454]: time="2025-01-13T21:30:17.406743783Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.406948 containerd[1454]: time="2025-01-13T21:30:17.406792825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77576767-x5sch,Uid:b087af71-417d-412a-8572-efae53d551a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.407089 kubelet[2501]: E0113 21:30:17.407048 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.407218 kubelet[2501]: E0113 21:30:17.407142 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c77576767-x5sch" Jan 13 21:30:17.407218 kubelet[2501]: E0113 21:30:17.407168 2501 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c77576767-x5sch" Jan 13 21:30:17.407273 kubelet[2501]: E0113 21:30:17.407232 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c77576767-x5sch_calico-apiserver(b087af71-417d-412a-8572-efae53d551a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c77576767-x5sch_calico-apiserver(b087af71-417d-412a-8572-efae53d551a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c77576767-x5sch" podUID="b087af71-417d-412a-8572-efae53d551a9" Jan 13 21:30:17.409343 containerd[1454]: time="2025-01-13T21:30:17.409219907Z" level=error msg="Failed to destroy network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
21:30:17.410302 containerd[1454]: time="2025-01-13T21:30:17.410267149Z" level=error msg="encountered an error cleaning up failed sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.410680 containerd[1454]: time="2025-01-13T21:30:17.410490890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ffhfs,Uid:e05a8b22-8ec1-444b-8447-61ff0cfe127a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.410680 containerd[1454]: time="2025-01-13T21:30:17.410353682Z" level=error msg="Failed to destroy network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.411183 kubelet[2501]: E0113 21:30:17.411110 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.411260 kubelet[2501]: E0113 21:30:17.411228 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ffhfs" Jan 13 21:30:17.411300 kubelet[2501]: E0113 21:30:17.411254 2501 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ffhfs" Jan 13 21:30:17.411327 kubelet[2501]: E0113 21:30:17.411296 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ffhfs_kube-system(e05a8b22-8ec1-444b-8447-61ff0cfe127a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ffhfs_kube-system(e05a8b22-8ec1-444b-8447-61ff0cfe127a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ffhfs" 
podUID="e05a8b22-8ec1-444b-8447-61ff0cfe127a" Jan 13 21:30:17.411669 containerd[1454]: time="2025-01-13T21:30:17.411645785Z" level=error msg="encountered an error cleaning up failed sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.411752 containerd[1454]: time="2025-01-13T21:30:17.411733480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cx8tp,Uid:6c756d2a-245d-4d57-88a3-fb1081dae774,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.411979 kubelet[2501]: E0113 21:30:17.411933 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.412020 kubelet[2501]: E0113 21:30:17.411975 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-cx8tp" Jan 13 21:30:17.412020 kubelet[2501]: E0113 21:30:17.411997 2501 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-cx8tp" Jan 13 21:30:17.412102 kubelet[2501]: E0113 21:30:17.412028 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-cx8tp_kube-system(6c756d2a-245d-4d57-88a3-fb1081dae774)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-cx8tp_kube-system(6c756d2a-245d-4d57-88a3-fb1081dae774)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-cx8tp" podUID="6c756d2a-245d-4d57-88a3-fb1081dae774" Jan 13 21:30:17.418267 containerd[1454]: time="2025-01-13T21:30:17.418145136Z" level=error msg="Failed to destroy network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.418730 containerd[1454]: time="2025-01-13T21:30:17.418691455Z" level=error msg="encountered an error cleaning up failed sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.418856 containerd[1454]: time="2025-01-13T21:30:17.418760815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bff6bbb7-h6q6q,Uid:cf845328-8306-43d5-9593-6d711b68c954,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.419053 kubelet[2501]: E0113 21:30:17.419020 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.419131 kubelet[2501]: E0113 21:30:17.419067 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bff6bbb7-h6q6q" Jan 13 21:30:17.419131 kubelet[2501]: E0113 21:30:17.419089 2501 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bff6bbb7-h6q6q" Jan 13 21:30:17.419333 kubelet[2501]: E0113 21:30:17.419147 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bff6bbb7-h6q6q_calico-system(cf845328-8306-43d5-9593-6d711b68c954)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bff6bbb7-h6q6q_calico-system(cf845328-8306-43d5-9593-6d711b68c954)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bff6bbb7-h6q6q" podUID="cf845328-8306-43d5-9593-6d711b68c954" Jan 13 21:30:17.426931 containerd[1454]: time="2025-01-13T21:30:17.426829492Z" level=error msg="Failed to destroy network for sandbox 
\"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.427302 containerd[1454]: time="2025-01-13T21:30:17.427262757Z" level=error msg="encountered an error cleaning up failed sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.427380 containerd[1454]: time="2025-01-13T21:30:17.427323392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77576767-dnhjp,Uid:a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.427540 kubelet[2501]: E0113 21:30:17.427489 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.427600 kubelet[2501]: E0113 21:30:17.427536 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c77576767-dnhjp" Jan 13 21:30:17.427600 kubelet[2501]: E0113 21:30:17.427567 2501 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c77576767-dnhjp" Jan 13 21:30:17.427703 kubelet[2501]: E0113 21:30:17.427610 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c77576767-dnhjp_calico-apiserver(a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c77576767-dnhjp_calico-apiserver(a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c77576767-dnhjp" podUID="a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb" Jan 
13 21:30:17.657060 kubelet[2501]: I0113 21:30:17.657028 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:17.657808 containerd[1454]: time="2025-01-13T21:30:17.657766597Z" level=info msg="StopPodSandbox for \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\"" Jan 13 21:30:17.657986 containerd[1454]: time="2025-01-13T21:30:17.657963016Z" level=info msg="Ensure that sandbox 00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64 in task-service has been cleanup successfully" Jan 13 21:30:17.658284 kubelet[2501]: I0113 21:30:17.658255 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:17.659057 containerd[1454]: time="2025-01-13T21:30:17.658745721Z" level=info msg="StopPodSandbox for \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\"" Jan 13 21:30:17.659057 containerd[1454]: time="2025-01-13T21:30:17.658884773Z" level=info msg="Ensure that sandbox 53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d in task-service has been cleanup successfully" Jan 13 21:30:17.659768 kubelet[2501]: I0113 21:30:17.659743 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:17.660296 containerd[1454]: time="2025-01-13T21:30:17.660228984Z" level=info msg="StopPodSandbox for \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\"" Jan 13 21:30:17.660713 containerd[1454]: time="2025-01-13T21:30:17.660541292Z" level=info msg="Ensure that sandbox 1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19 in task-service has been cleanup successfully" Jan 13 21:30:17.664501 kubelet[2501]: E0113 21:30:17.664449 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:17.665718 containerd[1454]: time="2025-01-13T21:30:17.665670232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:30:17.666910 kubelet[2501]: I0113 21:30:17.666874 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:17.668839 containerd[1454]: time="2025-01-13T21:30:17.667972978Z" level=info msg="StopPodSandbox for \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\"" Jan 13 21:30:17.668839 containerd[1454]: time="2025-01-13T21:30:17.668496114Z" level=info msg="Ensure that sandbox 830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4 in task-service has been cleanup successfully" Jan 13 21:30:17.669272 kubelet[2501]: I0113 21:30:17.669249 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:17.669869 containerd[1454]: time="2025-01-13T21:30:17.669823203Z" level=info msg="StopPodSandbox for \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\"" Jan 13 21:30:17.672377 containerd[1454]: time="2025-01-13T21:30:17.672333330Z" level=info msg="Ensure that sandbox b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2 in task-service has been cleanup successfully" Jan 13 21:30:17.695861 containerd[1454]: 
time="2025-01-13T21:30:17.695697002Z" level=error msg="StopPodSandbox for \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\" failed" error="failed to destroy network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.696892 kubelet[2501]: E0113 21:30:17.696674 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:17.696892 kubelet[2501]: E0113 21:30:17.696747 2501 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64"} Jan 13 21:30:17.696892 kubelet[2501]: E0113 21:30:17.696823 2501 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf845328-8306-43d5-9593-6d711b68c954\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:17.696892 kubelet[2501]: E0113 21:30:17.696853 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf845328-8306-43d5-9593-6d711b68c954\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bff6bbb7-h6q6q" podUID="cf845328-8306-43d5-9593-6d711b68c954" Jan 13 21:30:17.701708 containerd[1454]: time="2025-01-13T21:30:17.701663880Z" level=error msg="StopPodSandbox for \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\" failed" error="failed to destroy network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.702049 kubelet[2501]: E0113 21:30:17.702021 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:17.702150 kubelet[2501]: E0113 21:30:17.702130 2501 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19"} Jan 13 21:30:17.702289 kubelet[2501]: E0113 21:30:17.702229 2501 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e05a8b22-8ec1-444b-8447-61ff0cfe127a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:17.702289 kubelet[2501]: E0113 21:30:17.702262 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e05a8b22-8ec1-444b-8447-61ff0cfe127a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ffhfs" podUID="e05a8b22-8ec1-444b-8447-61ff0cfe127a" Jan 13 21:30:17.705105 containerd[1454]: time="2025-01-13T21:30:17.705038426Z" level=error msg="StopPodSandbox for \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\" failed" error="failed to destroy network for sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.705403 kubelet[2501]: E0113 21:30:17.705370 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:17.705578 kubelet[2501]: E0113 21:30:17.705485 2501 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d"} Jan 13 21:30:17.705578 kubelet[2501]: E0113 21:30:17.705522 2501 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:17.705578 kubelet[2501]: E0113 21:30:17.705548 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c77576767-dnhjp" podUID="a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb" Jan 13 21:30:17.712047 containerd[1454]: time="2025-01-13T21:30:17.711998755Z" level=error msg="StopPodSandbox for \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\" failed" error="failed to destroy network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.712319 kubelet[2501]: E0113 21:30:17.712285 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:17.712387 kubelet[2501]: E0113 21:30:17.712330 2501 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2"} Jan 13 21:30:17.712387 kubelet[2501]: E0113 21:30:17.712375 2501 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b087af71-417d-412a-8572-efae53d551a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:17.712465 kubelet[2501]: E0113 21:30:17.712395 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b087af71-417d-412a-8572-efae53d551a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c77576767-x5sch" podUID="b087af71-417d-412a-8572-efae53d551a9" Jan 13 21:30:17.717250 containerd[1454]: time="2025-01-13T21:30:17.717212223Z" level=error msg="StopPodSandbox for \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\" failed" error="failed to destroy network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:17.717436 kubelet[2501]: E0113 21:30:17.717397 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:17.717481 kubelet[2501]: E0113 21:30:17.717451 2501 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4"} Jan 13 21:30:17.717507 kubelet[2501]: E0113 21:30:17.717484 2501 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c756d2a-245d-4d57-88a3-fb1081dae774\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:17.717557 kubelet[2501]: E0113 21:30:17.717506 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c756d2a-245d-4d57-88a3-fb1081dae774\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-cx8tp" podUID="6c756d2a-245d-4d57-88a3-fb1081dae774" Jan 13 21:30:18.561905 systemd[1]: Created slice kubepods-besteffort-pode11df133_a251_4390_b19c_decc83ce2384.slice - libcontainer container kubepods-besteffort-pode11df133_a251_4390_b19c_decc83ce2384.slice. 
Jan 13 21:30:18.563869 containerd[1454]: time="2025-01-13T21:30:18.563832982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8rhh,Uid:e11df133-a251-4390-b19c-decc83ce2384,Namespace:calico-system,Attempt:0,}" Jan 13 21:30:18.770331 containerd[1454]: time="2025-01-13T21:30:18.770275231Z" level=error msg="Failed to destroy network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:18.770760 containerd[1454]: time="2025-01-13T21:30:18.770725900Z" level=error msg="encountered an error cleaning up failed sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:18.770818 containerd[1454]: time="2025-01-13T21:30:18.770791022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8rhh,Uid:e11df133-a251-4390-b19c-decc83ce2384,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:18.771257 kubelet[2501]: E0113 21:30:18.771209 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:18.771749 kubelet[2501]: E0113 21:30:18.771296 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8rhh" Jan 13 21:30:18.771749 kubelet[2501]: E0113 21:30:18.771323 2501 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8rhh" Jan 13 21:30:18.771749 kubelet[2501]: E0113 21:30:18.771380 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s8rhh_calico-system(e11df133-a251-4390-b19c-decc83ce2384)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s8rhh_calico-system(e11df133-a251-4390-b19c-decc83ce2384)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s8rhh" podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:18.772914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105-shm.mount: Deactivated successfully. Jan 13 21:30:19.672976 kubelet[2501]: I0113 21:30:19.672945 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:19.673528 containerd[1454]: time="2025-01-13T21:30:19.673486432Z" level=info msg="StopPodSandbox for \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\"" Jan 13 21:30:19.702837 containerd[1454]: time="2025-01-13T21:30:19.702796364Z" level=info msg="Ensure that sandbox 6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105 in task-service has been cleanup successfully" Jan 13 21:30:19.732046 containerd[1454]: time="2025-01-13T21:30:19.732002240Z" level=error msg="StopPodSandbox for \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\" failed" error="failed to destroy network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:19.732306 kubelet[2501]: E0113 21:30:19.732238 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:19.732306 kubelet[2501]: E0113 21:30:19.732293 2501 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105"} Jan 13 21:30:19.732484 kubelet[2501]: E0113 21:30:19.732324 2501 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e11df133-a251-4390-b19c-decc83ce2384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:19.732484 kubelet[2501]: E0113 21:30:19.732348 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e11df133-a251-4390-b19c-decc83ce2384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-s8rhh" podUID="e11df133-a251-4390-b19c-decc83ce2384" Jan 13 21:30:20.316273 systemd[1]: Started sshd@9-10.0.0.148:22-10.0.0.1:43084.service - OpenSSH per-connection server daemon (10.0.0.1:43084). Jan 13 21:30:20.357595 sshd[3650]: Accepted publickey for core from 10.0.0.1 port 43084 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:20.359152 sshd[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:20.364105 systemd-logind[1437]: New session 10 of user core. Jan 13 21:30:20.373334 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:30:20.486401 sshd[3650]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:20.490510 systemd[1]: sshd@9-10.0.0.148:22-10.0.0.1:43084.service: Deactivated successfully. Jan 13 21:30:20.492728 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:30:20.493461 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:30:20.494262 systemd-logind[1437]: Removed session 10. Jan 13 21:30:23.894898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount23552056.mount: Deactivated successfully. Jan 13 21:30:24.309627 containerd[1454]: time="2025-01-13T21:30:24.285301955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:30:24.309627 containerd[1454]: time="2025-01-13T21:30:24.309595047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:24.310104 containerd[1454]: time="2025-01-13T21:30:24.308982074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.643269653s" Jan 13 21:30:24.310104 containerd[1454]: time="2025-01-13T21:30:24.309728548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:30:24.310349 containerd[1454]: time="2025-01-13T21:30:24.310324669Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:24.310847 containerd[1454]: time="2025-01-13T21:30:24.310812797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:24.319128 containerd[1454]: time="2025-01-13T21:30:24.319082547Z" level=info msg="CreateContainer within sandbox \"295e58ca0c46180f07b7fe6983c5f128911559a44309e255956fa2f1a497830c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:30:24.562634 containerd[1454]: time="2025-01-13T21:30:24.562492816Z" level=info msg="CreateContainer within sandbox \"295e58ca0c46180f07b7fe6983c5f128911559a44309e255956fa2f1a497830c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fc2d68d6a02b0bcea262123eb57ec1ca963996bb69046bb96cbd999b6becc16a\"" Jan 13 21:30:24.562936 containerd[1454]: time="2025-01-13T21:30:24.562885485Z" level=info 
msg="StartContainer for \"fc2d68d6a02b0bcea262123eb57ec1ca963996bb69046bb96cbd999b6becc16a\"" Jan 13 21:30:24.636353 systemd[1]: Started cri-containerd-fc2d68d6a02b0bcea262123eb57ec1ca963996bb69046bb96cbd999b6becc16a.scope - libcontainer container fc2d68d6a02b0bcea262123eb57ec1ca963996bb69046bb96cbd999b6becc16a. Jan 13 21:30:24.847049 containerd[1454]: time="2025-01-13T21:30:24.846825217Z" level=info msg="StartContainer for \"fc2d68d6a02b0bcea262123eb57ec1ca963996bb69046bb96cbd999b6becc16a\" returns successfully" Jan 13 21:30:24.850105 kubelet[2501]: E0113 21:30:24.850015 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:24.854634 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:30:24.854710 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 21:30:24.889786 kubelet[2501]: I0113 21:30:24.889727 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bfqqr" podStartSLOduration=1.729866932 podStartE2EDuration="22.889711472s" podCreationTimestamp="2025-01-13 21:30:02 +0000 UTC" firstStartedPulling="2025-01-13 21:30:03.151178472 +0000 UTC m=+12.669003206" lastFinishedPulling="2025-01-13 21:30:24.311023012 +0000 UTC m=+33.828847746" observedRunningTime="2025-01-13 21:30:24.886135718 +0000 UTC m=+34.403960452" watchObservedRunningTime="2025-01-13 21:30:24.889711472 +0000 UTC m=+34.407536206" Jan 13 21:30:25.498538 systemd[1]: Started sshd@10-10.0.0.148:22-10.0.0.1:43090.service - OpenSSH per-connection server daemon (10.0.0.1:43090). Jan 13 21:30:25.555235 sshd[3740]: Accepted publickey for core from 10.0.0.1 port 43090 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:25.556858 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:25.562021 systemd-logind[1437]: New session 11 of user core. Jan 13 21:30:25.567335 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:30:25.692576 sshd[3740]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:25.696054 systemd[1]: sshd@10-10.0.0.148:22-10.0.0.1:43090.service: Deactivated successfully. Jan 13 21:30:25.698001 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:30:25.698559 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:30:25.699408 systemd-logind[1437]: Removed session 11. 
Jan 13 21:30:25.852354 kubelet[2501]: E0113 21:30:25.852105 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:26.853729 kubelet[2501]: E0113 21:30:26.853670 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:28.557992 containerd[1454]: time="2025-01-13T21:30:28.557343646Z" level=info msg="StopPodSandbox for \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\"" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.600 [INFO][4009] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.601 [INFO][4009] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" iface="eth0" netns="/var/run/netns/cni-d4fded4a-aa83-37b3-0aee-3f2611368e0e" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.601 [INFO][4009] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" iface="eth0" netns="/var/run/netns/cni-d4fded4a-aa83-37b3-0aee-3f2611368e0e" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.602 [INFO][4009] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" iface="eth0" netns="/var/run/netns/cni-d4fded4a-aa83-37b3-0aee-3f2611368e0e" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.602 [INFO][4009] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.602 [INFO][4009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.647 [INFO][4016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" HandleID="k8s-pod-network.b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.647 [INFO][4016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.648 [INFO][4016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.653 [WARNING][4016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" HandleID="k8s-pod-network.b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.653 [INFO][4016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" HandleID="k8s-pod-network.b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.654 [INFO][4016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:28.659443 containerd[1454]: 2025-01-13 21:30:28.657 [INFO][4009] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:28.659824 containerd[1454]: time="2025-01-13T21:30:28.659616345Z" level=info msg="TearDown network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\" successfully" Jan 13 21:30:28.659824 containerd[1454]: time="2025-01-13T21:30:28.659642936Z" level=info msg="StopPodSandbox for \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\" returns successfully" Jan 13 21:30:28.660429 containerd[1454]: time="2025-01-13T21:30:28.660382495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77576767-x5sch,Uid:b087af71-417d-412a-8572-efae53d551a9,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:30:28.662140 systemd[1]: run-netns-cni\x2dd4fded4a\x2daa83\x2d37b3\x2d0aee\x2d3f2611368e0e.mount: Deactivated successfully. Jan 13 21:30:28.953947 systemd-networkd[1400]: cali8d099badae7: Link UP Jan 13 21:30:28.954600 systemd-networkd[1400]: cali8d099badae7: Gained carrier Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.867 [INFO][4023] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.877 [INFO][4023] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0 calico-apiserver-6c77576767- calico-apiserver b087af71-417d-412a-8572-efae53d551a9 861 0 2025-01-13 21:30:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c77576767 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c77576767-x5sch eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8d099badae7 [] []}} ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-x5sch" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--x5sch-" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.877 [INFO][4023] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-x5sch" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.901 [INFO][4037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" HandleID="k8s-pod-network.58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.908 [INFO][4037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" HandleID="k8s-pod-network.58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000609700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c77576767-x5sch", "timestamp":"2025-01-13 21:30:28.901917548 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.909 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.909 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.909 [INFO][4037] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.910 [INFO][4037] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" host="localhost" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.914 [INFO][4037] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.917 [INFO][4037] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.918 [INFO][4037] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.920 [INFO][4037] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.920 [INFO][4037] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" host="localhost" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.921 [INFO][4037] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1 Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.924 [INFO][4037] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" host="localhost" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.932 [INFO][4037] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" host="localhost" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.932 [INFO][4037] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.129/26] handle="k8s-pod-network.58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" host="localhost" Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.932 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:28.981297 containerd[1454]: 2025-01-13 21:30:28.932 [INFO][4037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" HandleID="k8s-pod-network.58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:28.981848 containerd[1454]: 2025-01-13 21:30:28.937 [INFO][4023] cni-plugin/k8s.go 386: Populated endpoint ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-x5sch" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0", GenerateName:"calico-apiserver-6c77576767-", Namespace:"calico-apiserver", SelfLink:"", UID:"b087af71-417d-412a-8572-efae53d551a9", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77576767", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c77576767-x5sch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d099badae7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:28.981848 containerd[1454]: 2025-01-13 21:30:28.937 [INFO][4023] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-x5sch" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:28.981848 containerd[1454]: 2025-01-13 21:30:28.937 [INFO][4023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d099badae7 ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-x5sch" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:28.981848 containerd[1454]: 2025-01-13 21:30:28.954 [INFO][4023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-x5sch" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:28.981848 containerd[1454]: 2025-01-13 21:30:28.967 [INFO][4023] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-x5sch" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0", GenerateName:"calico-apiserver-6c77576767-", Namespace:"calico-apiserver", SelfLink:"", UID:"b087af71-417d-412a-8572-efae53d551a9", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77576767", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1", Pod:"calico-apiserver-6c77576767-x5sch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d099badae7", MAC:"1e:3d:35:6e:a9:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:28.981848 containerd[1454]: 2025-01-13 21:30:28.978 [INFO][4023] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-x5sch" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:29.009267 containerd[1454]: time="2025-01-13T21:30:29.009150959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:29.009877 containerd[1454]: time="2025-01-13T21:30:29.009268339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:29.010073 containerd[1454]: time="2025-01-13T21:30:29.009844752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:29.010073 containerd[1454]: time="2025-01-13T21:30:29.010025041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:29.042330 systemd[1]: Started cri-containerd-58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1.scope - libcontainer container 58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1. 
Jan 13 21:30:29.053475 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:29.076540 containerd[1454]: time="2025-01-13T21:30:29.076498928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77576767-x5sch,Uid:b087af71-417d-412a-8572-efae53d551a9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1\"" Jan 13 21:30:29.077996 containerd[1454]: time="2025-01-13T21:30:29.077955715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:30:29.672041 kubelet[2501]: I0113 21:30:29.671994 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:29.672470 kubelet[2501]: E0113 21:30:29.672378 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:29.868700 kubelet[2501]: E0113 21:30:29.868672 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:30.148345 systemd-networkd[1400]: cali8d099badae7: Gained IPv6LL Jan 13 21:30:30.391239 kernel: bpftool[4166]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:30:30.557569 containerd[1454]: time="2025-01-13T21:30:30.557416273Z" level=info msg="StopPodSandbox for \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\"" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.597 [INFO][4199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.597 [INFO][4199] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" iface="eth0" netns="/var/run/netns/cni-3d007a02-9240-8725-5756-4634a7a9ae35" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.599 [INFO][4199] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" iface="eth0" netns="/var/run/netns/cni-3d007a02-9240-8725-5756-4634a7a9ae35" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.600 [INFO][4199] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" iface="eth0" netns="/var/run/netns/cni-3d007a02-9240-8725-5756-4634a7a9ae35" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.600 [INFO][4199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.600 [INFO][4199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.623 [INFO][4210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" HandleID="k8s-pod-network.830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.624 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.624 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.629 [WARNING][4210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" HandleID="k8s-pod-network.830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.629 [INFO][4210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" HandleID="k8s-pod-network.830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.630 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:30.639429 containerd[1454]: 2025-01-13 21:30:30.634 [INFO][4199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:30.640414 containerd[1454]: time="2025-01-13T21:30:30.640274968Z" level=info msg="TearDown network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\" successfully" Jan 13 21:30:30.640414 containerd[1454]: time="2025-01-13T21:30:30.640309713Z" level=info msg="StopPodSandbox for \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\" returns successfully" Jan 13 21:30:30.641447 kubelet[2501]: E0113 21:30:30.640718 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:30.642124 containerd[1454]: time="2025-01-13T21:30:30.641751041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cx8tp,Uid:6c756d2a-245d-4d57-88a3-fb1081dae774,Namespace:kube-system,Attempt:1,}" Jan 13 21:30:30.645586 systemd[1]: run-netns-cni\x2d3d007a02\x2d9240\x2d8725\x2d5756\x2d4634a7a9ae35.mount: Deactivated successfully. 
Jan 13 21:30:30.669104 systemd-networkd[1400]: vxlan.calico: Link UP Jan 13 21:30:30.669117 systemd-networkd[1400]: vxlan.calico: Gained carrier Jan 13 21:30:30.713442 systemd[1]: Started sshd@11-10.0.0.148:22-10.0.0.1:48550.service - OpenSSH per-connection server daemon (10.0.0.1:48550). Jan 13 21:30:30.755907 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 48550 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:30.757603 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:30.763685 systemd-logind[1437]: New session 12 of user core. Jan 13 21:30:30.767339 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:30:30.804404 systemd-networkd[1400]: calia2ea1655c9d: Link UP Jan 13 21:30:30.804675 systemd-networkd[1400]: calia2ea1655c9d: Gained carrier Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.731 [INFO][4250] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0 coredns-6f6b679f8f- kube-system 6c756d2a-245d-4d57-88a3-fb1081dae774 884 0 2025-01-13 21:29:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-cx8tp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2ea1655c9d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cx8tp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cx8tp-" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.731 [INFO][4250] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cx8tp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.764 [INFO][4273] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" HandleID="k8s-pod-network.d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.772 [INFO][4273] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" HandleID="k8s-pod-network.d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004dbd50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-cx8tp", "timestamp":"2025-01-13 21:30:30.764026874 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.772 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.772 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.772 [INFO][4273] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.774 [INFO][4273] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" host="localhost" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.778 [INFO][4273] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.782 [INFO][4273] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.785 [INFO][4273] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.787 [INFO][4273] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.787 [INFO][4273] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" host="localhost" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.788 [INFO][4273] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7 Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.792 [INFO][4273] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" host="localhost" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.796 [INFO][4273] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" host="localhost" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.796 [INFO][4273] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" host="localhost" Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.796 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
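The IPAM sequence just above (acquire the host-wide lock, confirm the node's affinity for block 192.168.88.128/26, then claim 192.168.88.130/26 under a new handle) is Calico's per-host block allocation. A toy Python model of "next unused address from the node's affine block" reproduces the same result; the set of previously used addresses below is an assumption, and the real allocator records handles and allocations in its datastore rather than in memory:

    import ipaddress

    # Toy model of picking the next unused address from the node's /26 block
    # (not Calico's allocator; it persists allocations and handles in the datastore).
    block = ipaddress.ip_network("192.168.88.128/26")

    def next_free(block, in_use):
        for addr in block:          # the block is treated as a plain pool of 64 addresses
            if addr not in in_use:
                return addr
        return None                 # block exhausted; Calico would then claim or borrow another block

    in_use = {ipaddress.ip_address("192.168.88.128"),   # assumed earlier allocations on this node
              ipaddress.ip_address("192.168.88.129")}
    print(block.num_addresses)       # 64 (192.168.88.128 - 192.168.88.191)
    print(next_free(block, in_use))  # 192.168.88.130, matching the address claimed above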
Jan 13 21:30:31.044187 containerd[1454]: 2025-01-13 21:30:30.796 [INFO][4273] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" HandleID="k8s-pod-network.d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:31.044779 containerd[1454]: 2025-01-13 21:30:30.799 [INFO][4250] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cx8tp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6c756d2a-245d-4d57-88a3-fb1081dae774", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-cx8tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ea1655c9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:31.044779 containerd[1454]: 2025-01-13 21:30:30.799 [INFO][4250] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cx8tp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:31.044779 containerd[1454]: 2025-01-13 21:30:30.799 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2ea1655c9d ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cx8tp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:31.044779 containerd[1454]: 2025-01-13 21:30:30.803 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cx8tp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:31.044779 containerd[1454]: 2025-01-13 21:30:30.803 
[INFO][4250] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cx8tp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6c756d2a-245d-4d57-88a3-fb1081dae774", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7", Pod:"coredns-6f6b679f8f-cx8tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ea1655c9d", MAC:"8a:19:5c:72:94:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:31.044779 containerd[1454]: 2025-01-13 21:30:31.040 [INFO][4250] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cx8tp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:31.046394 sshd[4269]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:31.049299 systemd[1]: sshd@11-10.0.0.148:22-10.0.0.1:48550.service: Deactivated successfully. Jan 13 21:30:31.051127 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:30:31.052497 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:30:31.053618 systemd-logind[1437]: Removed session 12. Jan 13 21:30:31.222561 containerd[1454]: time="2025-01-13T21:30:31.221607424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:31.222561 containerd[1454]: time="2025-01-13T21:30:31.222347965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:31.222561 containerd[1454]: time="2025-01-13T21:30:31.222372662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:31.222561 containerd[1454]: time="2025-01-13T21:30:31.222487428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:31.250382 systemd[1]: Started cri-containerd-d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7.scope - libcontainer container d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7. Jan 13 21:30:31.264320 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:31.292398 containerd[1454]: time="2025-01-13T21:30:31.292345071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cx8tp,Uid:6c756d2a-245d-4d57-88a3-fb1081dae774,Namespace:kube-system,Attempt:1,} returns sandbox id \"d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7\"" Jan 13 21:30:31.293322 kubelet[2501]: E0113 21:30:31.293302 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:31.297109 containerd[1454]: time="2025-01-13T21:30:31.297027508Z" level=info msg="CreateContainer within sandbox \"d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:30:31.310832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001933381.mount: Deactivated successfully. Jan 13 21:30:31.316918 containerd[1454]: time="2025-01-13T21:30:31.316792469Z" level=info msg="CreateContainer within sandbox \"d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5081de4f72c110398739d821f3aff7f27394234b96cc3342f8fdc1f23000db18\"" Jan 13 21:30:31.317837 containerd[1454]: time="2025-01-13T21:30:31.317814949Z" level=info msg="StartContainer for \"5081de4f72c110398739d821f3aff7f27394234b96cc3342f8fdc1f23000db18\"" Jan 13 21:30:31.350337 systemd[1]: Started cri-containerd-5081de4f72c110398739d821f3aff7f27394234b96cc3342f8fdc1f23000db18.scope - libcontainer container 5081de4f72c110398739d821f3aff7f27394234b96cc3342f8fdc1f23000db18. Jan 13 21:30:31.379658 containerd[1454]: time="2025-01-13T21:30:31.379580531Z" level=info msg="StartContainer for \"5081de4f72c110398739d821f3aff7f27394234b96cc3342f8fdc1f23000db18\" returns successfully" Jan 13 21:30:31.557070 containerd[1454]: time="2025-01-13T21:30:31.556682475Z" level=info msg="StopPodSandbox for \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\"" Jan 13 21:30:31.557499 containerd[1454]: time="2025-01-13T21:30:31.557056287Z" level=info msg="StopPodSandbox for \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\"" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.680 [INFO][4454] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.680 [INFO][4454] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" iface="eth0" netns="/var/run/netns/cni-87145ae6-748f-3a6c-ef5d-b6899c3e9101" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.680 [INFO][4454] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" iface="eth0" netns="/var/run/netns/cni-87145ae6-748f-3a6c-ef5d-b6899c3e9101" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.681 [INFO][4454] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" iface="eth0" netns="/var/run/netns/cni-87145ae6-748f-3a6c-ef5d-b6899c3e9101" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.681 [INFO][4454] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.681 [INFO][4454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.709 [INFO][4475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" HandleID="k8s-pod-network.00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.709 [INFO][4475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.710 [INFO][4475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.718 [WARNING][4475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" HandleID="k8s-pod-network.00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.718 [INFO][4475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" HandleID="k8s-pod-network.00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.720 [INFO][4475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:31.728148 containerd[1454]: 2025-01-13 21:30:31.724 [INFO][4454] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:31.728867 containerd[1454]: time="2025-01-13T21:30:31.728415481Z" level=info msg="TearDown network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\" successfully" Jan 13 21:30:31.728867 containerd[1454]: time="2025-01-13T21:30:31.728442612Z" level=info msg="StopPodSandbox for \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\" returns successfully" Jan 13 21:30:31.729080 containerd[1454]: time="2025-01-13T21:30:31.728944875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bff6bbb7-h6q6q,Uid:cf845328-8306-43d5-9593-6d711b68c954,Namespace:calico-system,Attempt:1,}" Jan 13 21:30:31.731658 systemd[1]: run-netns-cni\x2d87145ae6\x2d748f\x2d3a6c\x2def5d\x2db6899c3e9101.mount: Deactivated successfully. 
Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.719 [INFO][4463] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.719 [INFO][4463] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" iface="eth0" netns="/var/run/netns/cni-3fea50ee-402d-2617-c33a-6ed974beb284" Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.719 [INFO][4463] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" iface="eth0" netns="/var/run/netns/cni-3fea50ee-402d-2617-c33a-6ed974beb284" Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.720 [INFO][4463] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" iface="eth0" netns="/var/run/netns/cni-3fea50ee-402d-2617-c33a-6ed974beb284" Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.720 [INFO][4463] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.720 [INFO][4463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.749 [INFO][4483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" HandleID="k8s-pod-network.1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.749 [INFO][4483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.749 [INFO][4483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.754 [WARNING][4483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" HandleID="k8s-pod-network.1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.754 [INFO][4483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" HandleID="k8s-pod-network.1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.756 [INFO][4483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:31.762183 containerd[1454]: 2025-01-13 21:30:31.758 [INFO][4463] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:31.762183 containerd[1454]: time="2025-01-13T21:30:31.762142090Z" level=info msg="TearDown network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\" successfully" Jan 13 21:30:31.762183 containerd[1454]: time="2025-01-13T21:30:31.762165173Z" level=info msg="StopPodSandbox for \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\" returns successfully" Jan 13 21:30:31.762692 kubelet[2501]: E0113 21:30:31.762503 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:31.763546 containerd[1454]: time="2025-01-13T21:30:31.763518134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ffhfs,Uid:e05a8b22-8ec1-444b-8447-61ff0cfe127a,Namespace:kube-system,Attempt:1,}" Jan 13 21:30:31.767579 systemd[1]: run-netns-cni\x2d3fea50ee\x2d402d\x2d2617\x2dc33a\x2d6ed974beb284.mount: Deactivated successfully. Jan 13 21:30:31.772904 containerd[1454]: time="2025-01-13T21:30:31.772845287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:31.773759 containerd[1454]: time="2025-01-13T21:30:31.773719709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 21:30:31.775851 containerd[1454]: time="2025-01-13T21:30:31.775805327Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:31.780623 containerd[1454]: time="2025-01-13T21:30:31.780578845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:31.781457 containerd[1454]: time="2025-01-13T21:30:31.781424684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.703428872s" Jan 13 21:30:31.781507 containerd[1454]: time="2025-01-13T21:30:31.781459169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:30:31.783720 containerd[1454]: time="2025-01-13T21:30:31.783693536Z" level=info msg="CreateContainer within sandbox \"58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:30:31.804976 containerd[1454]: time="2025-01-13T21:30:31.804729774Z" level=info msg="CreateContainer within sandbox \"58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fbefe71db5946926417b91365405d47dcc1eb5c41161303738b2b2abb4582ecd\"" Jan 13 21:30:31.807465 containerd[1454]: time="2025-01-13T21:30:31.805308402Z" level=info msg="StartContainer for 
\"fbefe71db5946926417b91365405d47dcc1eb5c41161303738b2b2abb4582ecd\"" Jan 13 21:30:31.834319 systemd[1]: Started cri-containerd-fbefe71db5946926417b91365405d47dcc1eb5c41161303738b2b2abb4582ecd.scope - libcontainer container fbefe71db5946926417b91365405d47dcc1eb5c41161303738b2b2abb4582ecd. Jan 13 21:30:32.011109 containerd[1454]: time="2025-01-13T21:30:32.011054162Z" level=info msg="StartContainer for \"fbefe71db5946926417b91365405d47dcc1eb5c41161303738b2b2abb4582ecd\" returns successfully" Jan 13 21:30:32.016824 kubelet[2501]: E0113 21:30:32.016538 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:32.098090 kubelet[2501]: I0113 21:30:32.097796 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-cx8tp" podStartSLOduration=36.09777078 podStartE2EDuration="36.09777078s" podCreationTimestamp="2025-01-13 21:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:30:32.097099048 +0000 UTC m=+41.614923783" watchObservedRunningTime="2025-01-13 21:30:32.09777078 +0000 UTC m=+41.615595514" Jan 13 21:30:32.113893 kubelet[2501]: I0113 21:30:32.113822 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c77576767-x5sch" podStartSLOduration=27.409194936 podStartE2EDuration="30.113802725s" podCreationTimestamp="2025-01-13 21:30:02 +0000 UTC" firstStartedPulling="2025-01-13 21:30:29.077613822 +0000 UTC m=+38.595438556" lastFinishedPulling="2025-01-13 21:30:31.782221611 +0000 UTC m=+41.300046345" observedRunningTime="2025-01-13 21:30:32.111847763 +0000 UTC m=+41.629672497" watchObservedRunningTime="2025-01-13 21:30:32.113802725 +0000 UTC m=+41.631627459" Jan 13 21:30:32.130655 systemd-networkd[1400]: cali7c9b1423f76: Link UP Jan 13 21:30:32.131342 systemd-networkd[1400]: cali7c9b1423f76: Gained carrier Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:31.785 [INFO][4490] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0 calico-kube-controllers-bff6bbb7- calico-system cf845328-8306-43d5-9593-6d711b68c954 901 0 2025-01-13 21:30:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bff6bbb7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bff6bbb7-h6q6q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7c9b1423f76 [] []}} ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Namespace="calico-system" Pod="calico-kube-controllers-bff6bbb7-h6q6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:31.785 [INFO][4490] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Namespace="calico-system" Pod="calico-kube-controllers-bff6bbb7-h6q6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:31.822 [INFO][4519] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" HandleID="k8s-pod-network.7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:31.828 [INFO][4519] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" HandleID="k8s-pod-network.7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000296f30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bff6bbb7-h6q6q", "timestamp":"2025-01-13 21:30:31.822072584 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:31.828 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:31.828 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:31.828 [INFO][4519] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:31.831 [INFO][4519] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" host="localhost" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.012 [INFO][4519] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.089 [INFO][4519] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.097 [INFO][4519] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.103 [INFO][4519] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.103 [INFO][4519] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" host="localhost" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.105 [INFO][4519] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75 Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.115 [INFO][4519] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" host="localhost" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.123 [INFO][4519] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" host="localhost" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.123 
[INFO][4519] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" host="localhost" Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.123 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:32.145412 containerd[1454]: 2025-01-13 21:30:32.123 [INFO][4519] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" HandleID="k8s-pod-network.7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:32.145985 containerd[1454]: 2025-01-13 21:30:32.128 [INFO][4490] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Namespace="calico-system" Pod="calico-kube-controllers-bff6bbb7-h6q6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0", GenerateName:"calico-kube-controllers-bff6bbb7-", Namespace:"calico-system", SelfLink:"", UID:"cf845328-8306-43d5-9593-6d711b68c954", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bff6bbb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bff6bbb7-h6q6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c9b1423f76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:32.145985 containerd[1454]: 2025-01-13 21:30:32.128 [INFO][4490] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Namespace="calico-system" Pod="calico-kube-controllers-bff6bbb7-h6q6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:32.145985 containerd[1454]: 2025-01-13 21:30:32.128 [INFO][4490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c9b1423f76 ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Namespace="calico-system" Pod="calico-kube-controllers-bff6bbb7-h6q6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:32.145985 containerd[1454]: 2025-01-13 21:30:32.130 [INFO][4490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" 
Namespace="calico-system" Pod="calico-kube-controllers-bff6bbb7-h6q6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:32.145985 containerd[1454]: 2025-01-13 21:30:32.131 [INFO][4490] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Namespace="calico-system" Pod="calico-kube-controllers-bff6bbb7-h6q6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0", GenerateName:"calico-kube-controllers-bff6bbb7-", Namespace:"calico-system", SelfLink:"", UID:"cf845328-8306-43d5-9593-6d711b68c954", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bff6bbb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75", Pod:"calico-kube-controllers-bff6bbb7-h6q6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c9b1423f76", MAC:"ba:51:9f:b7:5f:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:32.145985 containerd[1454]: 2025-01-13 21:30:32.141 [INFO][4490] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75" Namespace="calico-system" Pod="calico-kube-controllers-bff6bbb7-h6q6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:32.192114 systemd-networkd[1400]: cali75c1670c600: Link UP Jan 13 21:30:32.195366 systemd-networkd[1400]: cali75c1670c600: Gained carrier Jan 13 21:30:32.204872 containerd[1454]: time="2025-01-13T21:30:32.203651175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:32.204872 containerd[1454]: time="2025-01-13T21:30:32.203730614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:32.204872 containerd[1454]: time="2025-01-13T21:30:32.203748918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:32.204872 containerd[1454]: time="2025-01-13T21:30:32.203856200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:32.227337 systemd[1]: Started cri-containerd-7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75.scope - libcontainer container 7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75. Jan 13 21:30:32.241159 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:32.264871 containerd[1454]: time="2025-01-13T21:30:32.264824823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bff6bbb7-h6q6q,Uid:cf845328-8306-43d5-9593-6d711b68c954,Namespace:calico-system,Attempt:1,} returns sandbox id \"7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75\"" Jan 13 21:30:32.266310 containerd[1454]: time="2025-01-13T21:30:32.266268364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:31.815 [INFO][4508] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0 coredns-6f6b679f8f- kube-system e05a8b22-8ec1-444b-8447-61ff0cfe127a 902 0 2025-01-13 21:29:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-ffhfs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali75c1670c600 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Namespace="kube-system" Pod="coredns-6f6b679f8f-ffhfs" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ffhfs-" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:31.815 [INFO][4508] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Namespace="kube-system" Pod="coredns-6f6b679f8f-ffhfs" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:31.865 [INFO][4548] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" HandleID="k8s-pod-network.f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.056 [INFO][4548] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" HandleID="k8s-pod-network.f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b3860), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-ffhfs", "timestamp":"2025-01-13 21:30:31.865121226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.056 [INFO][4548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.123 [INFO][4548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.123 [INFO][4548] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.128 [INFO][4548] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" host="localhost" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.156 [INFO][4548] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.162 [INFO][4548] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.164 [INFO][4548] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.168 [INFO][4548] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.169 [INFO][4548] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" host="localhost" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.170 [INFO][4548] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47 Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.178 [INFO][4548] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" host="localhost" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.184 [INFO][4548] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" host="localhost" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.184 [INFO][4548] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" host="localhost" Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.184 [INFO][4548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:30:32.372412 containerd[1454]: 2025-01-13 21:30:32.184 [INFO][4548] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" HandleID="k8s-pod-network.f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:32.373430 containerd[1454]: 2025-01-13 21:30:32.188 [INFO][4508] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Namespace="kube-system" Pod="coredns-6f6b679f8f-ffhfs" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e05a8b22-8ec1-444b-8447-61ff0cfe127a", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-ffhfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75c1670c600", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:32.373430 containerd[1454]: 2025-01-13 21:30:32.188 [INFO][4508] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Namespace="kube-system" Pod="coredns-6f6b679f8f-ffhfs" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:32.373430 containerd[1454]: 2025-01-13 21:30:32.188 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75c1670c600 ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Namespace="kube-system" Pod="coredns-6f6b679f8f-ffhfs" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:32.373430 containerd[1454]: 2025-01-13 21:30:32.193 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Namespace="kube-system" Pod="coredns-6f6b679f8f-ffhfs" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:32.373430 containerd[1454]: 2025-01-13 21:30:32.196 
[INFO][4508] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Namespace="kube-system" Pod="coredns-6f6b679f8f-ffhfs" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e05a8b22-8ec1-444b-8447-61ff0cfe127a", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47", Pod:"coredns-6f6b679f8f-ffhfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75c1670c600", MAC:"8e:c2:e0:bd:17:54", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:32.373430 containerd[1454]: 2025-01-13 21:30:32.368 [INFO][4508] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47" Namespace="kube-system" Pod="coredns-6f6b679f8f-ffhfs" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:32.388371 systemd-networkd[1400]: calia2ea1655c9d: Gained IPv6LL Jan 13 21:30:32.543847 containerd[1454]: time="2025-01-13T21:30:32.543740516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:32.543847 containerd[1454]: time="2025-01-13T21:30:32.543816719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:32.543847 containerd[1454]: time="2025-01-13T21:30:32.543831467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:32.544036 containerd[1454]: time="2025-01-13T21:30:32.543948717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:32.559276 containerd[1454]: time="2025-01-13T21:30:32.559010879Z" level=info msg="StopPodSandbox for \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\"" Jan 13 21:30:32.564340 systemd[1]: Started cri-containerd-f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47.scope - libcontainer container f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47. Jan 13 21:30:32.580272 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:32.604284 containerd[1454]: time="2025-01-13T21:30:32.604243515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ffhfs,Uid:e05a8b22-8ec1-444b-8447-61ff0cfe127a,Namespace:kube-system,Attempt:1,} returns sandbox id \"f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47\"" Jan 13 21:30:32.604948 kubelet[2501]: E0113 21:30:32.604913 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:32.606533 containerd[1454]: time="2025-01-13T21:30:32.606494072Z" level=info msg="CreateContainer within sandbox \"f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:30:32.644347 systemd-networkd[1400]: vxlan.calico: Gained IPv6LL Jan 13 21:30:32.676659 containerd[1454]: time="2025-01-13T21:30:32.676563300Z" level=info msg="CreateContainer within sandbox \"f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8f4a13f46db11dec31fc03a37389477088dc1d14195eb4bfb5a41fcecbe2b16\"" Jan 13 21:30:32.677394 containerd[1454]: time="2025-01-13T21:30:32.677275998Z" level=info msg="StartContainer for \"a8f4a13f46db11dec31fc03a37389477088dc1d14195eb4bfb5a41fcecbe2b16\"" Jan 13 21:30:32.718333 systemd[1]: Started cri-containerd-a8f4a13f46db11dec31fc03a37389477088dc1d14195eb4bfb5a41fcecbe2b16.scope - libcontainer container a8f4a13f46db11dec31fc03a37389477088dc1d14195eb4bfb5a41fcecbe2b16. Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.659 [INFO][4699] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.659 [INFO][4699] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" iface="eth0" netns="/var/run/netns/cni-60860645-333f-fee5-2c36-2648ce864c64" Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.660 [INFO][4699] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" iface="eth0" netns="/var/run/netns/cni-60860645-333f-fee5-2c36-2648ce864c64" Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.660 [INFO][4699] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" iface="eth0" netns="/var/run/netns/cni-60860645-333f-fee5-2c36-2648ce864c64" Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.660 [INFO][4699] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.660 [INFO][4699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.689 [INFO][4719] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" HandleID="k8s-pod-network.53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.690 [INFO][4719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.690 [INFO][4719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.755 [WARNING][4719] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" HandleID="k8s-pod-network.53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.755 [INFO][4719] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" HandleID="k8s-pod-network.53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.757 [INFO][4719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:32.761801 containerd[1454]: 2025-01-13 21:30:32.759 [INFO][4699] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:32.762554 containerd[1454]: time="2025-01-13T21:30:32.761946532Z" level=info msg="TearDown network for sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\" successfully" Jan 13 21:30:32.762554 containerd[1454]: time="2025-01-13T21:30:32.761974745Z" level=info msg="StopPodSandbox for \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\" returns successfully" Jan 13 21:30:32.762795 containerd[1454]: time="2025-01-13T21:30:32.762750843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77576767-dnhjp,Uid:a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:30:32.764685 systemd[1]: run-netns-cni\x2d60860645\x2d333f\x2dfee5\x2d2c36\x2d2648ce864c64.mount: Deactivated successfully. 
Jan 13 21:30:32.777574 containerd[1454]: time="2025-01-13T21:30:32.777532419Z" level=info msg="StartContainer for \"a8f4a13f46db11dec31fc03a37389477088dc1d14195eb4bfb5a41fcecbe2b16\" returns successfully" Jan 13 21:30:32.925815 systemd-networkd[1400]: cali0649fa8a384: Link UP Jan 13 21:30:32.928013 systemd-networkd[1400]: cali0649fa8a384: Gained carrier Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.848 [INFO][4766] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0 calico-apiserver-6c77576767- calico-apiserver a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb 928 0 2025-01-13 21:30:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c77576767 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c77576767-dnhjp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0649fa8a384 [] []}} ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-dnhjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--dnhjp-" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.848 [INFO][4766] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-dnhjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.881 [INFO][4779] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" HandleID="k8s-pod-network.5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.889 [INFO][4779] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" HandleID="k8s-pod-network.5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c77576767-dnhjp", "timestamp":"2025-01-13 21:30:32.88116948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.889 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.889 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.889 [INFO][4779] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.891 [INFO][4779] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" host="localhost" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.894 [INFO][4779] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.898 [INFO][4779] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.900 [INFO][4779] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.902 [INFO][4779] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.902 [INFO][4779] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" host="localhost" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.904 [INFO][4779] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.909 [INFO][4779] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" host="localhost" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.918 [INFO][4779] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" host="localhost" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.918 [INFO][4779] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" host="localhost" Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.918 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:30:32.983783 containerd[1454]: 2025-01-13 21:30:32.919 [INFO][4779] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" HandleID="k8s-pod-network.5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:32.984352 containerd[1454]: 2025-01-13 21:30:32.922 [INFO][4766] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-dnhjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0", GenerateName:"calico-apiserver-6c77576767-", Namespace:"calico-apiserver", SelfLink:"", UID:"a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77576767", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c77576767-dnhjp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0649fa8a384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:32.984352 containerd[1454]: 2025-01-13 21:30:32.923 [INFO][4766] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-dnhjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:32.984352 containerd[1454]: 2025-01-13 21:30:32.923 [INFO][4766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0649fa8a384 ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-dnhjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:32.984352 containerd[1454]: 2025-01-13 21:30:32.928 [INFO][4766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-dnhjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:32.984352 containerd[1454]: 2025-01-13 21:30:32.928 [INFO][4766] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-dnhjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0", GenerateName:"calico-apiserver-6c77576767-", Namespace:"calico-apiserver", SelfLink:"", UID:"a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77576767", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba", Pod:"calico-apiserver-6c77576767-dnhjp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0649fa8a384", MAC:"ba:23:2c:db:ae:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:32.984352 containerd[1454]: 2025-01-13 21:30:32.978 [INFO][4766] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c77576767-dnhjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:33.020883 containerd[1454]: time="2025-01-13T21:30:33.020658010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:33.021453 containerd[1454]: time="2025-01-13T21:30:33.021174290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:33.021512 containerd[1454]: time="2025-01-13T21:30:33.021279167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:33.022552 containerd[1454]: time="2025-01-13T21:30:33.022396024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:33.028420 kubelet[2501]: E0113 21:30:33.028382 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:33.028631 kubelet[2501]: E0113 21:30:33.028555 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:33.028631 kubelet[2501]: I0113 21:30:33.028769 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:33.048939 systemd[1]: Started cri-containerd-5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba.scope - libcontainer container 5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba. Jan 13 21:30:33.063335 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:33.089530 containerd[1454]: time="2025-01-13T21:30:33.089471045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77576767-dnhjp,Uid:a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba\"" Jan 13 21:30:33.092784 containerd[1454]: time="2025-01-13T21:30:33.092722282Z" level=info msg="CreateContainer within sandbox \"5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:30:33.108309 kubelet[2501]: I0113 21:30:33.108138 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ffhfs" podStartSLOduration=37.107939162 podStartE2EDuration="37.107939162s" podCreationTimestamp="2025-01-13 21:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:30:33.107356587 +0000 UTC m=+42.625181331" watchObservedRunningTime="2025-01-13 21:30:33.107939162 +0000 UTC m=+42.625763906" Jan 13 21:30:33.236325 containerd[1454]: time="2025-01-13T21:30:33.234440907Z" level=info msg="CreateContainer within sandbox \"5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0b2a96829d1d849a94f768dc1a0094567565628f472d9ed18c2a5cebfe479868\"" Jan 13 21:30:33.236325 containerd[1454]: time="2025-01-13T21:30:33.235316302Z" level=info msg="StartContainer for \"0b2a96829d1d849a94f768dc1a0094567565628f472d9ed18c2a5cebfe479868\"" Jan 13 21:30:33.268361 systemd[1]: Started cri-containerd-0b2a96829d1d849a94f768dc1a0094567565628f472d9ed18c2a5cebfe479868.scope - libcontainer container 0b2a96829d1d849a94f768dc1a0094567565628f472d9ed18c2a5cebfe479868. 
Jan 13 21:30:33.306683 containerd[1454]: time="2025-01-13T21:30:33.306631456Z" level=info msg="StartContainer for \"0b2a96829d1d849a94f768dc1a0094567565628f472d9ed18c2a5cebfe479868\" returns successfully" Jan 13 21:30:33.348348 systemd-networkd[1400]: cali75c1670c600: Gained IPv6LL Jan 13 21:30:33.476321 systemd-networkd[1400]: cali7c9b1423f76: Gained IPv6LL Jan 13 21:30:34.033139 kubelet[2501]: E0113 21:30:34.032880 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:34.033704 kubelet[2501]: E0113 21:30:34.033249 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:34.045087 kubelet[2501]: I0113 21:30:34.045020 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c77576767-dnhjp" podStartSLOduration=32.045004462 podStartE2EDuration="32.045004462s" podCreationTimestamp="2025-01-13 21:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:30:34.044436856 +0000 UTC m=+43.562261590" watchObservedRunningTime="2025-01-13 21:30:34.045004462 +0000 UTC m=+43.562829196" Jan 13 21:30:34.628427 systemd-networkd[1400]: cali0649fa8a384: Gained IPv6LL Jan 13 21:30:34.763151 containerd[1454]: time="2025-01-13T21:30:34.763087359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:34.763953 containerd[1454]: time="2025-01-13T21:30:34.763912619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:30:34.765739 containerd[1454]: time="2025-01-13T21:30:34.765685639Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:34.768138 containerd[1454]: time="2025-01-13T21:30:34.768102287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:34.768763 containerd[1454]: time="2025-01-13T21:30:34.768706462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.502401919s" Jan 13 21:30:34.768763 containerd[1454]: time="2025-01-13T21:30:34.768759872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:30:34.779642 containerd[1454]: time="2025-01-13T21:30:34.779476679Z" level=info msg="CreateContainer within sandbox \"7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:30:34.793776 containerd[1454]: time="2025-01-13T21:30:34.793736689Z" 
level=info msg="CreateContainer within sandbox \"7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"20cb8d1e8cd98f5c65065564c23b552b145e24c82c7d352994035bb4b09fa8e5\"" Jan 13 21:30:34.794467 containerd[1454]: time="2025-01-13T21:30:34.794434530Z" level=info msg="StartContainer for \"20cb8d1e8cd98f5c65065564c23b552b145e24c82c7d352994035bb4b09fa8e5\"" Jan 13 21:30:34.822333 systemd[1]: Started cri-containerd-20cb8d1e8cd98f5c65065564c23b552b145e24c82c7d352994035bb4b09fa8e5.scope - libcontainer container 20cb8d1e8cd98f5c65065564c23b552b145e24c82c7d352994035bb4b09fa8e5. Jan 13 21:30:34.864894 containerd[1454]: time="2025-01-13T21:30:34.864836967Z" level=info msg="StartContainer for \"20cb8d1e8cd98f5c65065564c23b552b145e24c82c7d352994035bb4b09fa8e5\" returns successfully" Jan 13 21:30:35.036087 kubelet[2501]: I0113 21:30:35.036058 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:35.036557 kubelet[2501]: E0113 21:30:35.036440 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:35.037230 kubelet[2501]: E0113 21:30:35.036615 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:35.044835 kubelet[2501]: I0113 21:30:35.044785 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bff6bbb7-h6q6q" podStartSLOduration=30.541225564 podStartE2EDuration="33.044769528s" podCreationTimestamp="2025-01-13 21:30:02 +0000 UTC" firstStartedPulling="2025-01-13 21:30:32.266018876 +0000 UTC m=+41.783843600" lastFinishedPulling="2025-01-13 21:30:34.76956283 +0000 UTC m=+44.287387564" observedRunningTime="2025-01-13 21:30:35.044370539 +0000 UTC m=+44.562195273" watchObservedRunningTime="2025-01-13 21:30:35.044769528 +0000 UTC m=+44.562594262" Jan 13 21:30:35.556481 containerd[1454]: time="2025-01-13T21:30:35.556414394Z" level=info msg="StopPodSandbox for \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\"" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.602 [INFO][4946] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.602 [INFO][4946] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" iface="eth0" netns="/var/run/netns/cni-5d58144b-657d-48b4-83a6-c76aa0c6a349" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.603 [INFO][4946] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" iface="eth0" netns="/var/run/netns/cni-5d58144b-657d-48b4-83a6-c76aa0c6a349" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.603 [INFO][4946] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" iface="eth0" netns="/var/run/netns/cni-5d58144b-657d-48b4-83a6-c76aa0c6a349" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.603 [INFO][4946] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.603 [INFO][4946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.624 [INFO][4954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" HandleID="k8s-pod-network.6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.624 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.625 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.629 [WARNING][4954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" HandleID="k8s-pod-network.6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.629 [INFO][4954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" HandleID="k8s-pod-network.6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.631 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:35.636015 containerd[1454]: 2025-01-13 21:30:35.633 [INFO][4946] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:35.636412 containerd[1454]: time="2025-01-13T21:30:35.636208637Z" level=info msg="TearDown network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\" successfully" Jan 13 21:30:35.636412 containerd[1454]: time="2025-01-13T21:30:35.636237471Z" level=info msg="StopPodSandbox for \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\" returns successfully" Jan 13 21:30:35.636925 containerd[1454]: time="2025-01-13T21:30:35.636890787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8rhh,Uid:e11df133-a251-4390-b19c-decc83ce2384,Namespace:calico-system,Attempt:1,}" Jan 13 21:30:35.776902 systemd[1]: run-netns-cni\x2d5d58144b\x2d657d\x2d48b4\x2d83a6\x2dc76aa0c6a349.mount: Deactivated successfully. 
Jan 13 21:30:35.901888 systemd-networkd[1400]: cali5209268b575: Link UP Jan 13 21:30:35.902086 systemd-networkd[1400]: cali5209268b575: Gained carrier Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.677 [INFO][4962] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s8rhh-eth0 csi-node-driver- calico-system e11df133-a251-4390-b19c-decc83ce2384 980 0 2025-01-13 21:30:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s8rhh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5209268b575 [] []}} ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Namespace="calico-system" Pod="csi-node-driver-s8rhh" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8rhh-" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.677 [INFO][4962] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Namespace="calico-system" Pod="csi-node-driver-s8rhh" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.704 [INFO][4976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" HandleID="k8s-pod-network.88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.711 [INFO][4976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" HandleID="k8s-pod-network.88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5de0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s8rhh", "timestamp":"2025-01-13 21:30:35.704111438 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.711 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.711 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.711 [INFO][4976] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.712 [INFO][4976] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" host="localhost" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.715 [INFO][4976] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.719 [INFO][4976] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.721 [INFO][4976] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.723 [INFO][4976] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.723 [INFO][4976] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" host="localhost" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.724 [INFO][4976] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1 Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.741 [INFO][4976] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" host="localhost" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.896 [INFO][4976] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" host="localhost" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.896 [INFO][4976] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" host="localhost" Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.896 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:30:35.949272 containerd[1454]: 2025-01-13 21:30:35.896 [INFO][4976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" HandleID="k8s-pod-network.88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:35.950236 containerd[1454]: 2025-01-13 21:30:35.899 [INFO][4962] cni-plugin/k8s.go 386: Populated endpoint ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Namespace="calico-system" Pod="csi-node-driver-s8rhh" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8rhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s8rhh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e11df133-a251-4390-b19c-decc83ce2384", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s8rhh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5209268b575", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:35.950236 containerd[1454]: 2025-01-13 21:30:35.899 [INFO][4962] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Namespace="calico-system" Pod="csi-node-driver-s8rhh" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:35.950236 containerd[1454]: 2025-01-13 21:30:35.899 [INFO][4962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5209268b575 ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Namespace="calico-system" Pod="csi-node-driver-s8rhh" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:35.950236 containerd[1454]: 2025-01-13 21:30:35.901 [INFO][4962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Namespace="calico-system" Pod="csi-node-driver-s8rhh" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:35.950236 containerd[1454]: 2025-01-13 21:30:35.902 [INFO][4962] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Namespace="calico-system" Pod="csi-node-driver-s8rhh" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8rhh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s8rhh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e11df133-a251-4390-b19c-decc83ce2384", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1", Pod:"csi-node-driver-s8rhh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5209268b575", MAC:"32:15:61:a5:5c:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:35.950236 containerd[1454]: 2025-01-13 21:30:35.946 [INFO][4962] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1" Namespace="calico-system" Pod="csi-node-driver-s8rhh" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:36.025051 containerd[1454]: time="2025-01-13T21:30:36.024920147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:36.025051 containerd[1454]: time="2025-01-13T21:30:36.024993826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:36.025208 containerd[1454]: time="2025-01-13T21:30:36.025042046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:36.025851 containerd[1454]: time="2025-01-13T21:30:36.025776836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:36.038329 kubelet[2501]: I0113 21:30:36.037986 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:36.057328 systemd[1]: Started cri-containerd-88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1.scope - libcontainer container 88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1. Jan 13 21:30:36.062744 systemd[1]: Started sshd@12-10.0.0.148:22-10.0.0.1:48564.service - OpenSSH per-connection server daemon (10.0.0.1:48564). 
Jan 13 21:30:36.076639 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:36.088751 containerd[1454]: time="2025-01-13T21:30:36.088701810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8rhh,Uid:e11df133-a251-4390-b19c-decc83ce2384,Namespace:calico-system,Attempt:1,} returns sandbox id \"88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1\"" Jan 13 21:30:36.090159 containerd[1454]: time="2025-01-13T21:30:36.090127207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:30:36.127301 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 48564 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:36.129276 sshd[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:36.133339 systemd-logind[1437]: New session 13 of user core. Jan 13 21:30:36.142352 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:30:36.265067 sshd[5035]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:36.277176 systemd[1]: sshd@12-10.0.0.148:22-10.0.0.1:48564.service: Deactivated successfully. Jan 13 21:30:36.279084 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:30:36.280705 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:30:36.282019 systemd[1]: Started sshd@13-10.0.0.148:22-10.0.0.1:48566.service - OpenSSH per-connection server daemon (10.0.0.1:48566). Jan 13 21:30:36.282827 systemd-logind[1437]: Removed session 13. Jan 13 21:30:36.318903 sshd[5057]: Accepted publickey for core from 10.0.0.1 port 48566 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:36.320511 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:36.324289 systemd-logind[1437]: New session 14 of user core. Jan 13 21:30:36.334303 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:30:36.472385 sshd[5057]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:36.482510 systemd[1]: sshd@13-10.0.0.148:22-10.0.0.1:48566.service: Deactivated successfully. Jan 13 21:30:36.485965 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:30:36.487910 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:30:36.501515 systemd[1]: Started sshd@14-10.0.0.148:22-10.0.0.1:48576.service - OpenSSH per-connection server daemon (10.0.0.1:48576). Jan 13 21:30:36.502227 systemd-logind[1437]: Removed session 14. Jan 13 21:30:36.533058 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 48576 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:36.534650 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:36.538799 systemd-logind[1437]: New session 15 of user core. Jan 13 21:30:36.547314 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:30:36.941259 sshd[5069]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:36.946468 systemd[1]: sshd@14-10.0.0.148:22-10.0.0.1:48576.service: Deactivated successfully. Jan 13 21:30:36.949810 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:30:36.951087 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:30:36.952451 systemd-logind[1437]: Removed session 15. 
Jan 13 21:30:37.892438 systemd-networkd[1400]: cali5209268b575: Gained IPv6LL Jan 13 21:30:38.442911 containerd[1454]: time="2025-01-13T21:30:38.442828897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:38.484424 containerd[1454]: time="2025-01-13T21:30:38.484327496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:30:38.499056 containerd[1454]: time="2025-01-13T21:30:38.498998220Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:38.501836 containerd[1454]: time="2025-01-13T21:30:38.501788578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:38.502308 containerd[1454]: time="2025-01-13T21:30:38.502269330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.412083073s" Jan 13 21:30:38.502308 containerd[1454]: time="2025-01-13T21:30:38.502304937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:30:38.504305 containerd[1454]: time="2025-01-13T21:30:38.504267141Z" level=info msg="CreateContainer within sandbox \"88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:30:38.527501 containerd[1454]: time="2025-01-13T21:30:38.527449865Z" level=info msg="CreateContainer within sandbox \"88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d4ec6f2af081a2bbd30779d14a95ce9fac7b452679e2e9d9c10f2b9ea4baf0b7\"" Jan 13 21:30:38.527962 containerd[1454]: time="2025-01-13T21:30:38.527930066Z" level=info msg="StartContainer for \"d4ec6f2af081a2bbd30779d14a95ce9fac7b452679e2e9d9c10f2b9ea4baf0b7\"" Jan 13 21:30:38.560343 systemd[1]: Started cri-containerd-d4ec6f2af081a2bbd30779d14a95ce9fac7b452679e2e9d9c10f2b9ea4baf0b7.scope - libcontainer container d4ec6f2af081a2bbd30779d14a95ce9fac7b452679e2e9d9c10f2b9ea4baf0b7. 
Jan 13 21:30:38.607742 containerd[1454]: time="2025-01-13T21:30:38.607679971Z" level=info msg="StartContainer for \"d4ec6f2af081a2bbd30779d14a95ce9fac7b452679e2e9d9c10f2b9ea4baf0b7\" returns successfully" Jan 13 21:30:38.609014 containerd[1454]: time="2025-01-13T21:30:38.608958722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:30:39.943681 containerd[1454]: time="2025-01-13T21:30:39.943616049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:39.944311 containerd[1454]: time="2025-01-13T21:30:39.944261250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:30:39.945348 containerd[1454]: time="2025-01-13T21:30:39.945313856Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:39.947492 containerd[1454]: time="2025-01-13T21:30:39.947463241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:39.948157 containerd[1454]: time="2025-01-13T21:30:39.948122860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.339114284s" Jan 13 21:30:39.948157 containerd[1454]: time="2025-01-13T21:30:39.948153176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:30:39.950176 containerd[1454]: time="2025-01-13T21:30:39.950143974Z" level=info msg="CreateContainer within sandbox \"88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:30:39.964984 containerd[1454]: time="2025-01-13T21:30:39.964944438Z" level=info msg="CreateContainer within sandbox \"88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8cf4e54d0e6987893efdf393a219e9347306bdd77a0ab58559e1c796bb3be484\"" Jan 13 21:30:39.965705 containerd[1454]: time="2025-01-13T21:30:39.965639644Z" level=info msg="StartContainer for \"8cf4e54d0e6987893efdf393a219e9347306bdd77a0ab58559e1c796bb3be484\"" Jan 13 21:30:40.040378 systemd[1]: Started cri-containerd-8cf4e54d0e6987893efdf393a219e9347306bdd77a0ab58559e1c796bb3be484.scope - libcontainer container 8cf4e54d0e6987893efdf393a219e9347306bdd77a0ab58559e1c796bb3be484. 
Jan 13 21:30:40.070347 containerd[1454]: time="2025-01-13T21:30:40.070304181Z" level=info msg="StartContainer for \"8cf4e54d0e6987893efdf393a219e9347306bdd77a0ab58559e1c796bb3be484\" returns successfully" Jan 13 21:30:40.620793 kubelet[2501]: I0113 21:30:40.620746 2501 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:30:40.620793 kubelet[2501]: I0113 21:30:40.620781 2501 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:30:41.065386 kubelet[2501]: I0113 21:30:41.065332 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s8rhh" podStartSLOduration=35.206227223 podStartE2EDuration="39.065317551s" podCreationTimestamp="2025-01-13 21:30:02 +0000 UTC" firstStartedPulling="2025-01-13 21:30:36.089894219 +0000 UTC m=+45.607718943" lastFinishedPulling="2025-01-13 21:30:39.948984536 +0000 UTC m=+49.466809271" observedRunningTime="2025-01-13 21:30:41.064050693 +0000 UTC m=+50.581875427" watchObservedRunningTime="2025-01-13 21:30:41.065317551 +0000 UTC m=+50.583142285" Jan 13 21:30:41.952235 systemd[1]: Started sshd@15-10.0.0.148:22-10.0.0.1:49966.service - OpenSSH per-connection server daemon (10.0.0.1:49966). Jan 13 21:30:41.995029 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 49966 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:41.996601 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:42.000389 systemd-logind[1437]: New session 16 of user core. Jan 13 21:30:42.019324 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:30:42.136286 sshd[5223]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:42.140485 systemd[1]: sshd@15-10.0.0.148:22-10.0.0.1:49966.service: Deactivated successfully. Jan 13 21:30:42.142572 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:30:42.143238 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:30:42.144045 systemd-logind[1437]: Removed session 16. Jan 13 21:30:47.146954 systemd[1]: Started sshd@16-10.0.0.148:22-10.0.0.1:49980.service - OpenSSH per-connection server daemon (10.0.0.1:49980). Jan 13 21:30:47.185104 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 49980 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:47.186793 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:47.191108 systemd-logind[1437]: New session 17 of user core. Jan 13 21:30:47.199335 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:30:47.312263 sshd[5239]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:47.316684 systemd[1]: sshd@16-10.0.0.148:22-10.0.0.1:49980.service: Deactivated successfully. Jan 13 21:30:47.318535 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:30:47.319163 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:30:47.320144 systemd-logind[1437]: Removed session 17. 
Jan 13 21:30:50.550648 containerd[1454]: time="2025-01-13T21:30:50.550593717Z" level=info msg="StopPodSandbox for \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\"" Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.581 [WARNING][5267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e05a8b22-8ec1-444b-8447-61ff0cfe127a", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47", Pod:"coredns-6f6b679f8f-ffhfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75c1670c600", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.582 [INFO][5267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.582 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" iface="eth0" netns="" Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.582 [INFO][5267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.582 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.603 [INFO][5276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" HandleID="k8s-pod-network.1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.603 [INFO][5276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.603 [INFO][5276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.608 [WARNING][5276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" HandleID="k8s-pod-network.1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.608 [INFO][5276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" HandleID="k8s-pod-network.1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.609 [INFO][5276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:50.614368 containerd[1454]: 2025-01-13 21:30:50.611 [INFO][5267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:50.614903 containerd[1454]: time="2025-01-13T21:30:50.614478885Z" level=info msg="TearDown network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\" successfully" Jan 13 21:30:50.614903 containerd[1454]: time="2025-01-13T21:30:50.614503942Z" level=info msg="StopPodSandbox for \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\" returns successfully" Jan 13 21:30:50.620948 containerd[1454]: time="2025-01-13T21:30:50.620919119Z" level=info msg="RemovePodSandbox for \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\"" Jan 13 21:30:50.623646 containerd[1454]: time="2025-01-13T21:30:50.623614106Z" level=info msg="Forcibly stopping sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\"" Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.653 [WARNING][5298] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e05a8b22-8ec1-444b-8447-61ff0cfe127a", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f02cd798524674c94b71d24519cfc24c13226e47b90c99a1a1914490e0e19f47", Pod:"coredns-6f6b679f8f-ffhfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75c1670c600", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.654 [INFO][5298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.654 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" iface="eth0" netns="" Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.654 [INFO][5298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.654 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.672 [INFO][5305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" HandleID="k8s-pod-network.1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.672 [INFO][5305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.672 [INFO][5305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.676 [WARNING][5305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" HandleID="k8s-pod-network.1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.676 [INFO][5305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" HandleID="k8s-pod-network.1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Workload="localhost-k8s-coredns--6f6b679f8f--ffhfs-eth0" Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.677 [INFO][5305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:50.681571 containerd[1454]: 2025-01-13 21:30:50.679 [INFO][5298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19" Jan 13 21:30:50.681984 containerd[1454]: time="2025-01-13T21:30:50.681593844Z" level=info msg="TearDown network for sandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\" successfully" Jan 13 21:30:50.695925 containerd[1454]: time="2025-01-13T21:30:50.695886501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:50.695977 containerd[1454]: time="2025-01-13T21:30:50.695942627Z" level=info msg="RemovePodSandbox \"1c70a17fb861a3e24d7a18059bbcc4dc573f1f68f89e45850beefb9c07f20e19\" returns successfully" Jan 13 21:30:50.696596 containerd[1454]: time="2025-01-13T21:30:50.696550577Z" level=info msg="StopPodSandbox for \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\"" Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.726 [WARNING][5327] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6c756d2a-245d-4d57-88a3-fb1081dae774", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7", Pod:"coredns-6f6b679f8f-cx8tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ea1655c9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.726 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.726 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" iface="eth0" netns="" Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.726 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.726 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.745 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" HandleID="k8s-pod-network.830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.745 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.745 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.749 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" HandleID="k8s-pod-network.830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.749 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" HandleID="k8s-pod-network.830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.750 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:50.754863 containerd[1454]: 2025-01-13 21:30:50.752 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:50.755394 containerd[1454]: time="2025-01-13T21:30:50.754895500Z" level=info msg="TearDown network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\" successfully" Jan 13 21:30:50.755394 containerd[1454]: time="2025-01-13T21:30:50.754922120Z" level=info msg="StopPodSandbox for \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\" returns successfully" Jan 13 21:30:50.755441 containerd[1454]: time="2025-01-13T21:30:50.755416537Z" level=info msg="RemovePodSandbox for \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\"" Jan 13 21:30:50.755466 containerd[1454]: time="2025-01-13T21:30:50.755440031Z" level=info msg="Forcibly stopping sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\"" Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.788 [WARNING][5356] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6c756d2a-245d-4d57-88a3-fb1081dae774", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4032b83274437dfaedb29a0364c193547412b5054559277962d9b142dbe57e7", Pod:"coredns-6f6b679f8f-cx8tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ea1655c9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.788 [INFO][5356] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.788 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" iface="eth0" netns="" Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.788 [INFO][5356] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.788 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.808 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" HandleID="k8s-pod-network.830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.808 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.808 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.812 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" HandleID="k8s-pod-network.830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.812 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" HandleID="k8s-pod-network.830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Workload="localhost-k8s-coredns--6f6b679f8f--cx8tp-eth0" Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.813 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:50.818060 containerd[1454]: 2025-01-13 21:30:50.815 [INFO][5356] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4" Jan 13 21:30:50.818060 containerd[1454]: time="2025-01-13T21:30:50.818039929Z" level=info msg="TearDown network for sandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\" successfully" Jan 13 21:30:50.822143 containerd[1454]: time="2025-01-13T21:30:50.822099315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:50.822222 containerd[1454]: time="2025-01-13T21:30:50.822155320Z" level=info msg="RemovePodSandbox \"830ebf2e87e8f4829fa53434d276d65e3d71fc466e14dfca2ea527b12c3c05c4\" returns successfully" Jan 13 21:30:50.822667 containerd[1454]: time="2025-01-13T21:30:50.822634800Z" level=info msg="StopPodSandbox for \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\"" Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.853 [WARNING][5385] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0", GenerateName:"calico-kube-controllers-bff6bbb7-", Namespace:"calico-system", SelfLink:"", UID:"cf845328-8306-43d5-9593-6d711b68c954", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bff6bbb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75", Pod:"calico-kube-controllers-bff6bbb7-h6q6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c9b1423f76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.854 [INFO][5385] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.854 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" iface="eth0" netns="" Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.854 [INFO][5385] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.854 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.871 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" HandleID="k8s-pod-network.00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.871 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.872 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.876 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" HandleID="k8s-pod-network.00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.876 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" HandleID="k8s-pod-network.00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.877 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:50.882017 containerd[1454]: 2025-01-13 21:30:50.879 [INFO][5385] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:50.882437 containerd[1454]: time="2025-01-13T21:30:50.882057926Z" level=info msg="TearDown network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\" successfully" Jan 13 21:30:50.882437 containerd[1454]: time="2025-01-13T21:30:50.882083213Z" level=info msg="StopPodSandbox for \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\" returns successfully" Jan 13 21:30:50.882558 containerd[1454]: time="2025-01-13T21:30:50.882535832Z" level=info msg="RemovePodSandbox for \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\"" Jan 13 21:30:50.882589 containerd[1454]: time="2025-01-13T21:30:50.882561090Z" level=info msg="Forcibly stopping sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\"" Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.912 [WARNING][5414] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0", GenerateName:"calico-kube-controllers-bff6bbb7-", Namespace:"calico-system", SelfLink:"", UID:"cf845328-8306-43d5-9593-6d711b68c954", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bff6bbb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ea2eaf2ea942bae319463bca024144016a5018070b150ed245d74a7f95fdf75", Pod:"calico-kube-controllers-bff6bbb7-h6q6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c9b1423f76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.912 [INFO][5414] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.912 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" iface="eth0" netns="" Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.912 [INFO][5414] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.912 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.930 [INFO][5421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" HandleID="k8s-pod-network.00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.930 [INFO][5421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.930 [INFO][5421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.934 [WARNING][5421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" HandleID="k8s-pod-network.00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.934 [INFO][5421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" HandleID="k8s-pod-network.00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Workload="localhost-k8s-calico--kube--controllers--bff6bbb7--h6q6q-eth0" Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.935 [INFO][5421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:50.939746 containerd[1454]: 2025-01-13 21:30:50.937 [INFO][5414] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64" Jan 13 21:30:50.940138 containerd[1454]: time="2025-01-13T21:30:50.939763630Z" level=info msg="TearDown network for sandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\" successfully" Jan 13 21:30:50.943774 containerd[1454]: time="2025-01-13T21:30:50.943748397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:50.943820 containerd[1454]: time="2025-01-13T21:30:50.943791457Z" level=info msg="RemovePodSandbox \"00472fd1d48191d9d623baae70da3b706b5656c3f6136640f5927c3f05f0dc64\" returns successfully" Jan 13 21:30:50.944293 containerd[1454]: time="2025-01-13T21:30:50.944250709Z" level=info msg="StopPodSandbox for \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\"" Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:50.975 [WARNING][5443] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0", GenerateName:"calico-apiserver-6c77576767-", Namespace:"calico-apiserver", SelfLink:"", UID:"b087af71-417d-412a-8572-efae53d551a9", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77576767", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1", Pod:"calico-apiserver-6c77576767-x5sch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d099badae7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:50.977 [INFO][5443] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:50.977 [INFO][5443] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" iface="eth0" netns="" Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:50.977 [INFO][5443] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:50.977 [INFO][5443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:50.996 [INFO][5457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" HandleID="k8s-pod-network.b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:50.996 [INFO][5457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:50.996 [INFO][5457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:51.000 [WARNING][5457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" HandleID="k8s-pod-network.b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:51.000 [INFO][5457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" HandleID="k8s-pod-network.b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:51.001 [INFO][5457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:51.005767 containerd[1454]: 2025-01-13 21:30:51.003 [INFO][5443] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:51.006417 containerd[1454]: time="2025-01-13T21:30:51.005791227Z" level=info msg="TearDown network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\" successfully" Jan 13 21:30:51.006417 containerd[1454]: time="2025-01-13T21:30:51.005817506Z" level=info msg="StopPodSandbox for \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\" returns successfully" Jan 13 21:30:51.006417 containerd[1454]: time="2025-01-13T21:30:51.006360115Z" level=info msg="RemovePodSandbox for \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\"" Jan 13 21:30:51.006417 containerd[1454]: time="2025-01-13T21:30:51.006395020Z" level=info msg="Forcibly stopping sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\"" Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.037 [WARNING][5481] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0", GenerateName:"calico-apiserver-6c77576767-", Namespace:"calico-apiserver", SelfLink:"", UID:"b087af71-417d-412a-8572-efae53d551a9", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77576767", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58d9fb534250663f8dd964a6f7e5bf114fe8a0ac0b8edc880d475838b476edc1", Pod:"calico-apiserver-6c77576767-x5sch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d099badae7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.037 [INFO][5481] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.037 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" iface="eth0" netns="" Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.037 [INFO][5481] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.037 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.054 [INFO][5488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" HandleID="k8s-pod-network.b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.054 [INFO][5488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.055 [INFO][5488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.059 [WARNING][5488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" HandleID="k8s-pod-network.b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.059 [INFO][5488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" HandleID="k8s-pod-network.b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Workload="localhost-k8s-calico--apiserver--6c77576767--x5sch-eth0" Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.059 [INFO][5488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:51.064459 containerd[1454]: 2025-01-13 21:30:51.062 [INFO][5481] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2" Jan 13 21:30:51.064855 containerd[1454]: time="2025-01-13T21:30:51.064498307Z" level=info msg="TearDown network for sandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\" successfully" Jan 13 21:30:51.068220 containerd[1454]: time="2025-01-13T21:30:51.068124800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:51.068220 containerd[1454]: time="2025-01-13T21:30:51.068168563Z" level=info msg="RemovePodSandbox \"b5dc52728d0ba5644fdf9538e392f959c29ec75d50605f06aca425790313afb2\" returns successfully" Jan 13 21:30:51.068877 containerd[1454]: time="2025-01-13T21:30:51.068731539Z" level=info msg="StopPodSandbox for \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\"" Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.102 [WARNING][5512] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0", GenerateName:"calico-apiserver-6c77576767-", Namespace:"calico-apiserver", SelfLink:"", UID:"a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77576767", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba", Pod:"calico-apiserver-6c77576767-dnhjp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0649fa8a384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.102 [INFO][5512] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.102 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" iface="eth0" netns="" Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.102 [INFO][5512] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.102 [INFO][5512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.120 [INFO][5520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" HandleID="k8s-pod-network.53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.120 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.120 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.125 [WARNING][5520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" HandleID="k8s-pod-network.53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.125 [INFO][5520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" HandleID="k8s-pod-network.53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.126 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:51.130251 containerd[1454]: 2025-01-13 21:30:51.127 [INFO][5512] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:51.130657 containerd[1454]: time="2025-01-13T21:30:51.130289737Z" level=info msg="TearDown network for sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\" successfully" Jan 13 21:30:51.130657 containerd[1454]: time="2025-01-13T21:30:51.130323781Z" level=info msg="StopPodSandbox for \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\" returns successfully" Jan 13 21:30:51.130744 containerd[1454]: time="2025-01-13T21:30:51.130713793Z" level=info msg="RemovePodSandbox for \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\"" Jan 13 21:30:51.130744 containerd[1454]: time="2025-01-13T21:30:51.130739902Z" level=info msg="Forcibly stopping sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\"" Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.161 [WARNING][5542] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0", GenerateName:"calico-apiserver-6c77576767-", Namespace:"calico-apiserver", SelfLink:"", UID:"a78c9dd2-1fdd-4b9c-a54f-1f124cc6cdeb", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77576767", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5aa873e22b10440815971438885236dc3b0b15038d37072bfe07ef270f7139ba", Pod:"calico-apiserver-6c77576767-dnhjp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0649fa8a384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.162 [INFO][5542] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.162 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" iface="eth0" netns="" Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.162 [INFO][5542] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.162 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.179 [INFO][5549] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" HandleID="k8s-pod-network.53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.179 [INFO][5549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.179 [INFO][5549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.183 [WARNING][5549] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" HandleID="k8s-pod-network.53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.183 [INFO][5549] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" HandleID="k8s-pod-network.53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Workload="localhost-k8s-calico--apiserver--6c77576767--dnhjp-eth0" Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.184 [INFO][5549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:51.188988 containerd[1454]: 2025-01-13 21:30:51.186 [INFO][5542] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d" Jan 13 21:30:51.189420 containerd[1454]: time="2025-01-13T21:30:51.189055747Z" level=info msg="TearDown network for sandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\" successfully" Jan 13 21:30:51.195295 containerd[1454]: time="2025-01-13T21:30:51.192860285Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:51.195295 containerd[1454]: time="2025-01-13T21:30:51.192957387Z" level=info msg="RemovePodSandbox \"53254ef75ebf4417dc24e2400e4e4e65b4a8f52230a3511ef0e48d4e8f84531d\" returns successfully" Jan 13 21:30:51.195800 containerd[1454]: time="2025-01-13T21:30:51.195474300Z" level=info msg="StopPodSandbox for \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\"" Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.226 [WARNING][5572] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s8rhh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e11df133-a251-4390-b19c-decc83ce2384", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1", Pod:"csi-node-driver-s8rhh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5209268b575", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.226 [INFO][5572] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.226 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" iface="eth0" netns="" Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.226 [INFO][5572] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.226 [INFO][5572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.244 [INFO][5580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" HandleID="k8s-pod-network.6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.244 [INFO][5580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.244 [INFO][5580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.248 [WARNING][5580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" HandleID="k8s-pod-network.6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.248 [INFO][5580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" HandleID="k8s-pod-network.6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.250 [INFO][5580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:51.254453 containerd[1454]: 2025-01-13 21:30:51.252 [INFO][5572] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:51.254988 containerd[1454]: time="2025-01-13T21:30:51.254486010Z" level=info msg="TearDown network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\" successfully" Jan 13 21:30:51.254988 containerd[1454]: time="2025-01-13T21:30:51.254514604Z" level=info msg="StopPodSandbox for \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\" returns successfully" Jan 13 21:30:51.255032 containerd[1454]: time="2025-01-13T21:30:51.254981290Z" level=info msg="RemovePodSandbox for \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\"" Jan 13 21:30:51.255032 containerd[1454]: time="2025-01-13T21:30:51.255005235Z" level=info msg="Forcibly stopping sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\"" Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.288 [WARNING][5603] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s8rhh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e11df133-a251-4390-b19c-decc83ce2384", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88acd55099b2b64bb23463010ccc00243e18d42bf9ec04ba2629f084bbd335c1", Pod:"csi-node-driver-s8rhh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5209268b575", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.288 [INFO][5603] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.288 [INFO][5603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" iface="eth0" netns="" Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.288 [INFO][5603] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.288 [INFO][5603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.306 [INFO][5611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" HandleID="k8s-pod-network.6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.306 [INFO][5611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.306 [INFO][5611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.310 [WARNING][5611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" HandleID="k8s-pod-network.6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.310 [INFO][5611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" HandleID="k8s-pod-network.6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Workload="localhost-k8s-csi--node--driver--s8rhh-eth0" Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.311 [INFO][5611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:51.316362 containerd[1454]: 2025-01-13 21:30:51.314 [INFO][5603] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105" Jan 13 21:30:51.316836 containerd[1454]: time="2025-01-13T21:30:51.316397301Z" level=info msg="TearDown network for sandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\" successfully" Jan 13 21:30:51.320021 containerd[1454]: time="2025-01-13T21:30:51.319942453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:51.320021 containerd[1454]: time="2025-01-13T21:30:51.319984071Z" level=info msg="RemovePodSandbox \"6d583327fafba84bc7da1af8b72e9a41c5d4313ee422ff554efa5be3a79ce105\" returns successfully" Jan 13 21:30:52.324238 systemd[1]: Started sshd@17-10.0.0.148:22-10.0.0.1:36072.service - OpenSSH per-connection server daemon (10.0.0.1:36072). Jan 13 21:30:52.362772 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 36072 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:52.364248 sshd[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:52.367791 systemd-logind[1437]: New session 18 of user core. Jan 13 21:30:52.377332 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:30:52.488261 sshd[5620]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:52.492591 systemd[1]: sshd@17-10.0.0.148:22-10.0.0.1:36072.service: Deactivated successfully. Jan 13 21:30:52.494641 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:30:52.495283 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:30:52.496101 systemd-logind[1437]: Removed session 18. Jan 13 21:30:55.214511 kubelet[2501]: I0113 21:30:55.214478 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:55.565877 kubelet[2501]: E0113 21:30:55.565829 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:57.498904 systemd[1]: Started sshd@18-10.0.0.148:22-10.0.0.1:47746.service - OpenSSH per-connection server daemon (10.0.0.1:47746). Jan 13 21:30:57.535074 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 47746 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:57.536479 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:57.540298 systemd-logind[1437]: New session 19 of user core. 
Jan 13 21:30:57.552331 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:30:57.651432 sshd[5661]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:57.654821 systemd[1]: sshd@18-10.0.0.148:22-10.0.0.1:47746.service: Deactivated successfully. Jan 13 21:30:57.656667 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:30:57.657256 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:30:57.658079 systemd-logind[1437]: Removed session 19. Jan 13 21:30:59.792958 kubelet[2501]: I0113 21:30:59.792851 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:31:01.556639 kubelet[2501]: E0113 21:31:01.556604 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:31:02.663404 systemd[1]: Started sshd@19-10.0.0.148:22-10.0.0.1:47762.service - OpenSSH per-connection server daemon (10.0.0.1:47762). Jan 13 21:31:02.707945 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 47762 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:31:02.709540 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:02.713749 systemd-logind[1437]: New session 20 of user core. Jan 13 21:31:02.719336 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:31:02.840739 sshd[5678]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:02.850856 systemd[1]: sshd@19-10.0.0.148:22-10.0.0.1:47762.service: Deactivated successfully. Jan 13 21:31:02.852584 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:31:02.854273 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:31:02.862556 systemd[1]: Started sshd@20-10.0.0.148:22-10.0.0.1:47768.service - OpenSSH per-connection server daemon (10.0.0.1:47768). Jan 13 21:31:02.863497 systemd-logind[1437]: Removed session 20. Jan 13 21:31:02.895006 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 47768 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:31:02.896415 sshd[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:02.900408 systemd-logind[1437]: New session 21 of user core. Jan 13 21:31:02.911409 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:31:03.092899 sshd[5692]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:03.101303 systemd[1]: sshd@20-10.0.0.148:22-10.0.0.1:47768.service: Deactivated successfully. Jan 13 21:31:03.103219 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:31:03.104846 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:31:03.111418 systemd[1]: Started sshd@21-10.0.0.148:22-10.0.0.1:47776.service - OpenSSH per-connection server daemon (10.0.0.1:47776). Jan 13 21:31:03.112247 systemd-logind[1437]: Removed session 21. Jan 13 21:31:03.146747 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 47776 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:31:03.148119 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:03.152115 systemd-logind[1437]: New session 22 of user core. Jan 13 21:31:03.159362 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 13 21:31:04.791365 sshd[5704]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:04.803176 systemd[1]: sshd@21-10.0.0.148:22-10.0.0.1:47776.service: Deactivated successfully. Jan 13 21:31:04.810990 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:31:04.813745 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:31:04.819368 systemd[1]: Started sshd@22-10.0.0.148:22-10.0.0.1:47788.service - OpenSSH per-connection server daemon (10.0.0.1:47788). Jan 13 21:31:04.820366 systemd-logind[1437]: Removed session 22. Jan 13 21:31:04.854983 sshd[5724]: Accepted publickey for core from 10.0.0.1 port 47788 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:31:04.856516 sshd[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:04.860256 systemd-logind[1437]: New session 23 of user core. Jan 13 21:31:04.868414 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:31:05.073977 sshd[5724]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:05.084069 systemd[1]: sshd@22-10.0.0.148:22-10.0.0.1:47788.service: Deactivated successfully. Jan 13 21:31:05.085793 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:31:05.087056 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:31:05.088277 systemd[1]: Started sshd@23-10.0.0.148:22-10.0.0.1:47790.service - OpenSSH per-connection server daemon (10.0.0.1:47790). Jan 13 21:31:05.088946 systemd-logind[1437]: Removed session 23. Jan 13 21:31:05.138718 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 47790 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:31:05.140222 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:05.143815 systemd-logind[1437]: New session 24 of user core. Jan 13 21:31:05.155309 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:31:05.260040 sshd[5736]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:05.263462 systemd[1]: sshd@23-10.0.0.148:22-10.0.0.1:47790.service: Deactivated successfully. Jan 13 21:31:05.265302 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:31:05.266002 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:31:05.267121 systemd-logind[1437]: Removed session 24. Jan 13 21:31:09.556105 kubelet[2501]: E0113 21:31:09.556062 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:31:10.271821 systemd[1]: Started sshd@24-10.0.0.148:22-10.0.0.1:54340.service - OpenSSH per-connection server daemon (10.0.0.1:54340). Jan 13 21:31:10.307776 sshd[5770]: Accepted publickey for core from 10.0.0.1 port 54340 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:31:10.309290 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:10.313150 systemd-logind[1437]: New session 25 of user core. Jan 13 21:31:10.324314 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:31:10.441150 sshd[5770]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:10.445238 systemd[1]: sshd@24-10.0.0.148:22-10.0.0.1:54340.service: Deactivated successfully. Jan 13 21:31:10.447068 systemd[1]: session-25.scope: Deactivated successfully. 
Jan 13 21:31:10.447720 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:31:10.448693 systemd-logind[1437]: Removed session 25. Jan 13 21:31:15.457222 systemd[1]: Started sshd@25-10.0.0.148:22-10.0.0.1:54350.service - OpenSSH per-connection server daemon (10.0.0.1:54350). Jan 13 21:31:15.497033 sshd[5794]: Accepted publickey for core from 10.0.0.1 port 54350 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:31:15.498762 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:15.502616 systemd-logind[1437]: New session 26 of user core. Jan 13 21:31:15.509317 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:31:15.623611 sshd[5794]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:15.627740 systemd[1]: sshd@25-10.0.0.148:22-10.0.0.1:54350.service: Deactivated successfully. Jan 13 21:31:15.629908 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:31:15.630703 systemd-logind[1437]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:31:15.631694 systemd-logind[1437]: Removed session 26. Jan 13 21:31:20.556691 kubelet[2501]: E0113 21:31:20.556653 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:31:20.636443 systemd[1]: Started sshd@26-10.0.0.148:22-10.0.0.1:49642.service - OpenSSH per-connection server daemon (10.0.0.1:49642). Jan 13 21:31:20.673867 sshd[5811]: Accepted publickey for core from 10.0.0.1 port 49642 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:31:20.675369 sshd[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:20.679272 systemd-logind[1437]: New session 27 of user core. Jan 13 21:31:20.685316 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 21:31:20.791055 sshd[5811]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:20.795581 systemd[1]: sshd@26-10.0.0.148:22-10.0.0.1:49642.service: Deactivated successfully. Jan 13 21:31:20.797822 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 21:31:20.798436 systemd-logind[1437]: Session 27 logged out. Waiting for processes to exit. Jan 13 21:31:20.799369 systemd-logind[1437]: Removed session 27. Jan 13 21:31:22.556928 kubelet[2501]: E0113 21:31:22.556881 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:31:25.802228 systemd[1]: Started sshd@27-10.0.0.148:22-10.0.0.1:49656.service - OpenSSH per-connection server daemon (10.0.0.1:49656). Jan 13 21:31:25.842334 sshd[5848]: Accepted publickey for core from 10.0.0.1 port 49656 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:31:25.843917 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:25.847634 systemd-logind[1437]: New session 28 of user core. Jan 13 21:31:25.853326 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 21:31:25.959462 sshd[5848]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:25.963535 systemd[1]: sshd@27-10.0.0.148:22-10.0.0.1:49656.service: Deactivated successfully. Jan 13 21:31:25.965679 systemd[1]: session-28.scope: Deactivated successfully. 
Jan 13 21:31:25.966324 systemd-logind[1437]: Session 28 logged out. Waiting for processes to exit. Jan 13 21:31:25.967272 systemd-logind[1437]: Removed session 28.