Jan 13 21:20:56.898548 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:20:56.898568 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:20:56.898579 kernel: BIOS-provided physical RAM map:
Jan 13 21:20:56.898585 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:20:56.898591 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:20:56.898597 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:20:56.898604 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 21:20:56.898610 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 21:20:56.898616 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:20:56.898624 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:20:56.898631 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:20:56.898637 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:20:56.898643 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:20:56.898649 kernel: NX (Execute Disable) protection: active
Jan 13 21:20:56.898657 kernel: APIC: Static calls initialized
Jan 13 21:20:56.898665 kernel: SMBIOS 2.8 present.
Jan 13 21:20:56.898672 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 21:20:56.898679 kernel: Hypervisor detected: KVM
Jan 13 21:20:56.898685 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:20:56.898692 kernel: kvm-clock: using sched offset of 2191894515 cycles
Jan 13 21:20:56.898710 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:20:56.898718 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:20:56.898725 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:20:56.898732 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:20:56.898738 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 21:20:56.898748 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:20:56.898755 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:20:56.898762 kernel: Using GB pages for direct mapping
Jan 13 21:20:56.898769 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:20:56.898776 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 21:20:56.898783 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:20:56.898790 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:20:56.898797 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:20:56.898806 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 21:20:56.898813 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:20:56.898819 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:20:56.898826 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:20:56.898833 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:20:56.898840 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 21:20:56.898847 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 21:20:56.898857 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 21:20:56.898866 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 21:20:56.898873 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 21:20:56.898880 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 21:20:56.898888 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 21:20:56.898895 kernel: No NUMA configuration found
Jan 13 21:20:56.898902 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 21:20:56.898909 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 21:20:56.898918 kernel: Zone ranges:
Jan 13 21:20:56.898925 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:20:56.898932 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 21:20:56.898939 kernel: Normal empty
Jan 13 21:20:56.898946 kernel: Movable zone start for each node
Jan 13 21:20:56.898953 kernel: Early memory node ranges
Jan 13 21:20:56.898960 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:20:56.898967 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 21:20:56.898974 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 21:20:56.898984 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:20:56.898991 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:20:56.898998 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 21:20:56.899005 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:20:56.899012 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:20:56.899019 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:20:56.899026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:20:56.899033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:20:56.899040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:20:56.899049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:20:56.899056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:20:56.899063 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:20:56.899071 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:20:56.899078 kernel: TSC deadline timer available
Jan 13 21:20:56.899085 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:20:56.899092 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:20:56.899099 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:20:56.899106 kernel: kvm-guest: setup PV sched yield
Jan 13 21:20:56.899115 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:20:56.899122 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:20:56.899130 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:20:56.899137 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:20:56.899150 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:20:56.899157 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:20:56.899164 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:20:56.899171 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:20:56.899178 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:20:56.899187 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:20:56.899197 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:20:56.899204 kernel: random: crng init done
Jan 13 21:20:56.899211 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:20:56.899218 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:20:56.899226 kernel: Fallback order for Node 0: 0
Jan 13 21:20:56.899233 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 21:20:56.899240 kernel: Policy zone: DMA32
Jan 13 21:20:56.899247 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:20:56.899257 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 13 21:20:56.899264 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:20:56.899271 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:20:56.899278 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:20:56.899285 kernel: Dynamic Preempt: voluntary
Jan 13 21:20:56.899292 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:20:56.899300 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:20:56.899307 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:20:56.899315 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:20:56.899324 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:20:56.899331 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:20:56.899338 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:20:56.899345 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:20:56.899353 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:20:56.899360 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:20:56.899367 kernel: Console: colour VGA+ 80x25
Jan 13 21:20:56.899374 kernel: printk: console [ttyS0] enabled
Jan 13 21:20:56.899381 kernel: ACPI: Core revision 20230628
Jan 13 21:20:56.899390 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:20:56.899397 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:20:56.899405 kernel: x2apic enabled
Jan 13 21:20:56.899412 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:20:56.899419 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:20:56.899426 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:20:56.899433 kernel: kvm-guest: setup PV IPIs
Jan 13 21:20:56.899449 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:20:56.899457 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:20:56.899464 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:20:56.899472 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:20:56.899479 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:20:56.899488 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:20:56.899496 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:20:56.899503 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:20:56.899511 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:20:56.899518 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:20:56.899528 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:20:56.899535 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:20:56.899543 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:20:56.899550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:20:56.899558 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:20:56.899566 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:20:56.899573 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:20:56.899581 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:20:56.899590 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:20:56.899598 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:20:56.899605 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:20:56.899613 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:20:56.899620 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:20:56.899628 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:20:56.899635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:20:56.899642 kernel: landlock: Up and running.
Jan 13 21:20:56.899650 kernel: SELinux: Initializing.
Jan 13 21:20:56.899659 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:20:56.899667 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:20:56.899674 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:20:56.899682 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:20:56.899689 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:20:56.899708 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:20:56.899716 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:20:56.899723 kernel: ... version: 0
Jan 13 21:20:56.899730 kernel: ... bit width: 48
Jan 13 21:20:56.899740 kernel: ... generic registers: 6
Jan 13 21:20:56.899748 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:20:56.899755 kernel: ... max period: 00007fffffffffff
Jan 13 21:20:56.899763 kernel: ... fixed-purpose events: 0
Jan 13 21:20:56.899770 kernel: ... event mask: 000000000000003f
Jan 13 21:20:56.899777 kernel: signal: max sigframe size: 1776
Jan 13 21:20:56.899785 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:20:56.899792 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:20:56.899800 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:20:56.899809 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:20:56.899817 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:20:56.899824 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:20:56.899831 kernel: smpboot: Max logical packages: 1
Jan 13 21:20:56.899839 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:20:56.899846 kernel: devtmpfs: initialized
Jan 13 21:20:56.899854 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:20:56.899861 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:20:56.899869 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:20:56.899878 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:20:56.899886 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:20:56.899893 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:20:56.899901 kernel: audit: type=2000 audit(1736803256.683:1): state=initialized audit_enabled=0 res=1
Jan 13 21:20:56.899908 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:20:56.899916 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:20:56.899923 kernel: cpuidle: using governor menu
Jan 13 21:20:56.899930 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:20:56.899938 kernel: dca service started, version 1.12.1
Jan 13 21:20:56.899947 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:20:56.899955 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:20:56.899962 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:20:56.899970 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:20:56.899977 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:20:56.899985 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:20:56.899992 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:20:56.900000 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:20:56.900007 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:20:56.900017 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:20:56.900024 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:20:56.900031 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:20:56.900039 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:20:56.900046 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:20:56.900054 kernel: ACPI: Interpreter enabled
Jan 13 21:20:56.900061 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:20:56.900068 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:20:56.900076 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:20:56.900085 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:20:56.900093 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:20:56.900100 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:20:56.900277 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:20:56.900405 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:20:56.900525 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:20:56.900535 kernel: PCI host bridge to bus 0000:00
Jan 13 21:20:56.900663 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:20:56.900811 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:20:56.900923 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:20:56.901032 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:20:56.901148 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:20:56.901265 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 21:20:56.901376 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:20:56.901517 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:20:56.901647 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:20:56.901821 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 21:20:56.901942 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 21:20:56.902060 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 21:20:56.902191 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:20:56.902323 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:20:56.902448 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:20:56.902568 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 21:20:56.902688 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 21:20:56.902837 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:20:56.902958 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:20:56.903078 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 21:20:56.903211 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 21:20:56.903342 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:20:56.903463 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 21:20:56.903582 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 21:20:56.903713 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 21:20:56.903837 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 21:20:56.903964 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:20:56.904090 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:20:56.904226 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:20:56.904345 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 21:20:56.904469 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 21:20:56.904599 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:20:56.904734 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:20:56.904748 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:20:56.904763 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:20:56.904770 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:20:56.904778 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:20:56.904786 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:20:56.904793 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:20:56.904801 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:20:56.904808 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:20:56.904815 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:20:56.904823 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:20:56.904833 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:20:56.904840 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:20:56.904848 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:20:56.904855 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:20:56.904862 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:20:56.904870 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:20:56.904877 kernel: iommu: Default domain type: Translated
Jan 13 21:20:56.904885 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:20:56.904892 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:20:56.904902 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:20:56.904909 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:20:56.904916 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 21:20:56.905039 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:20:56.905169 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:20:56.905289 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:20:56.905299 kernel: vgaarb: loaded
Jan 13 21:20:56.905307 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:20:56.905318 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:20:56.905326 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:20:56.905333 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:20:56.905341 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:20:56.905349 kernel: pnp: PnP ACPI init
Jan 13 21:20:56.905476 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:20:56.905487 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:20:56.905495 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:20:56.905507 kernel: NET: Registered PF_INET protocol family
Jan 13 21:20:56.905514 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:20:56.905522 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:20:56.905530 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:20:56.905538 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:20:56.905545 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:20:56.905553 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:20:56.905561 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:20:56.905568 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:20:56.905578 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:20:56.905586 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:20:56.905709 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:20:56.905823 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:20:56.905932 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:20:56.906042 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:20:56.906160 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:20:56.906271 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 21:20:56.906285 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:20:56.906293 kernel: Initialise system trusted keyrings
Jan 13 21:20:56.906301 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:20:56.906309 kernel: Key type asymmetric registered
Jan 13 21:20:56.906316 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:20:56.906324 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:20:56.906332 kernel: io scheduler mq-deadline registered
Jan 13 21:20:56.906339 kernel: io scheduler kyber registered
Jan 13 21:20:56.906347 kernel: io scheduler bfq registered
Jan 13 21:20:56.906356 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:20:56.906365 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:20:56.906372 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:20:56.906380 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:20:56.906388 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:20:56.906395 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:20:56.906403 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:20:56.906411 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:20:56.906419 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:20:56.906548 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:20:56.906559 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:20:56.906670 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:20:56.906837 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:20:56 UTC (1736803256)
Jan 13 21:20:56.906950 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:20:56.906960 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:20:56.906967 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:20:56.906975 kernel: Segment Routing with IPv6
Jan 13 21:20:56.906986 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:20:56.906994 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:20:56.907001 kernel: Key type dns_resolver registered
Jan 13 21:20:56.907009 kernel: IPI shorthand broadcast: enabled
Jan 13 21:20:56.907016 kernel: sched_clock: Marking stable (573002367, 104277931)->(722057154, -44776856)
Jan 13 21:20:56.907024 kernel: registered taskstats version 1
Jan 13 21:20:56.907031 kernel: Loading compiled-in X.509 certificates
Jan 13 21:20:56.907039 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:20:56.907046 kernel: Key type .fscrypt registered
Jan 13 21:20:56.907056 kernel: Key type fscrypt-provisioning registered
Jan 13 21:20:56.907063 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:20:56.907071 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:20:56.907078 kernel: ima: No architecture policies found
Jan 13 21:20:56.907085 kernel: clk: Disabling unused clocks
Jan 13 21:20:56.907093 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:20:56.907100 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:20:56.907108 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:20:56.907115 kernel: Run /init as init process
Jan 13 21:20:56.907125 kernel: with arguments:
Jan 13 21:20:56.907132 kernel: /init
Jan 13 21:20:56.907139 kernel: with environment:
Jan 13 21:20:56.907154 kernel: HOME=/
Jan 13 21:20:56.907161 kernel: TERM=linux
Jan 13 21:20:56.907168 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:20:56.907187 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:20:56.907204 systemd[1]: Detected virtualization kvm.
Jan 13 21:20:56.907215 systemd[1]: Detected architecture x86-64.
Jan 13 21:20:56.907223 systemd[1]: Running in initrd.
Jan 13 21:20:56.907231 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:20:56.907238 systemd[1]: Hostname set to .
Jan 13 21:20:56.907247 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:20:56.907255 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:20:56.907263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:20:56.907271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:20:56.907282 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:20:56.907301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:20:56.907312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:20:56.907320 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:20:56.907330 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:20:56.907341 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:20:56.907349 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:20:56.907357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:20:56.907365 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:20:56.907373 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:20:56.907382 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:20:56.907390 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:20:56.907398 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:20:56.907409 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:20:56.907417 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:20:56.907425 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:20:56.907433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:20:56.907442 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:20:56.907450 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:20:56.907458 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:20:56.907467 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:20:56.907475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:20:56.907485 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:20:56.907496 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:20:56.907504 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:20:56.907512 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:20:56.907520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:20:56.907528 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:20:56.907537 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:20:56.907545 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:20:56.907573 systemd-journald[192]: Collecting audit messages is disabled.
Jan 13 21:20:56.907594 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:20:56.907605 systemd-journald[192]: Journal started
Jan 13 21:20:56.907625 systemd-journald[192]: Runtime Journal (/run/log/journal/042e0d2c24624761a15af64f1a704cb4) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:20:56.893493 systemd-modules-load[194]: Inserted module 'overlay'
Jan 13 21:20:56.932329 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:20:56.932345 kernel: Bridge firewalling registered
Jan 13 21:20:56.932355 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:20:56.921230 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 13 21:20:56.932551 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:20:56.934726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:20:56.937006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:20:56.951894 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:20:56.953096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:20:56.953907 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:20:56.957899 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:20:56.966758 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:20:56.969194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:20:56.972197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:20:56.975024 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:20:56.993890 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:20:56.996169 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:20:57.004868 dracut-cmdline[227]: dracut-dracut-053
Jan 13 21:20:57.007687 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:20:57.028785 systemd-resolved[229]: Positive Trust Anchors:
Jan 13 21:20:57.028801 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:20:57.028833 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:20:57.031325 systemd-resolved[229]: Defaulting to hostname 'linux'.
Jan 13 21:20:57.032346 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:20:57.038400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:20:57.090728 kernel: SCSI subsystem initialized
Jan 13 21:20:57.099722 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:20:57.109721 kernel: iscsi: registered transport (tcp)
Jan 13 21:20:57.130897 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:20:57.130943 kernel: QLogic iSCSI HBA Driver
Jan 13 21:20:57.183121 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:20:57.194819 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:20:57.220214 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:20:57.220246 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:20:57.221261 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:20:57.263726 kernel: raid6: avx2x4 gen() 30111 MB/s
Jan 13 21:20:57.280719 kernel: raid6: avx2x2 gen() 31303 MB/s
Jan 13 21:20:57.297787 kernel: raid6: avx2x1 gen() 26116 MB/s
Jan 13 21:20:57.297803 kernel: raid6: using algorithm avx2x2 gen() 31303 MB/s
Jan 13 21:20:57.315795 kernel: raid6: .... xor() 19987 MB/s, rmw enabled
Jan 13 21:20:57.315810 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:20:57.335722 kernel: xor: automatically using best checksumming function avx
Jan 13 21:20:57.488724 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:20:57.502348 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:20:57.509927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:20:57.522411 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jan 13 21:20:57.526931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:20:57.533850 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:20:57.546937 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jan 13 21:20:57.578253 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:20:57.589848 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:20:57.651931 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:20:57.667879 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:20:57.679887 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:20:57.683671 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:20:57.685859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:20:57.693550 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 21:20:57.705080 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:20:57.705239 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:20:57.705252 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:20:57.705263 kernel: GPT:9289727 != 19775487
Jan 13 21:20:57.705273 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:20:57.705283 kernel: GPT:9289727 != 19775487
Jan 13 21:20:57.705293 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:20:57.705309 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:20:57.688249 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:20:57.696839 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:20:57.715716 kernel: libata version 3.00 loaded.
Jan 13 21:20:57.716591 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:20:57.726070 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:20:57.726200 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:20:57.730853 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 21:20:57.751751 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 21:20:57.751768 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:20:57.751779 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 21:20:57.751930 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 21:20:57.752075 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:20:57.752086 kernel: scsi host0: ahci
Jan 13 21:20:57.752252 kernel: scsi host1: ahci
Jan 13 21:20:57.752408 kernel: scsi host2: ahci
Jan 13 21:20:57.752554 kernel: scsi host3: ahci
Jan 13 21:20:57.752711 kernel: scsi host4: ahci
Jan 13 21:20:57.752858 kernel: scsi host5: ahci
Jan 13 21:20:57.752996 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 13 21:20:57.753012 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 13 21:20:57.753022 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 13 21:20:57.753033 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 13 21:20:57.753043 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 13 21:20:57.753053 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 13 21:20:57.753063 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (471)
Jan 13 21:20:57.729522 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:20:57.731227 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:20:57.731431 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:20:57.734883 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:20:57.745056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:20:57.757492 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461)
Jan 13 21:20:57.772309 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:20:57.805697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:20:57.812887 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:20:57.827595 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:20:57.830821 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:20:57.837769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:20:57.846842 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:20:57.847951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:20:57.856800 disk-uuid[552]: Primary Header is updated.
Jan 13 21:20:57.856800 disk-uuid[552]: Secondary Entries is updated.
Jan 13 21:20:57.856800 disk-uuid[552]: Secondary Header is updated.
Jan 13 21:20:57.860730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:20:57.865731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:20:57.869512 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:20:58.069805 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 21:20:58.069882 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 21:20:58.069894 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 21:20:58.070716 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 21:20:58.071731 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 21:20:58.072723 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 21:20:58.072739 kernel: ata3.00: applying bridge limits
Jan 13 21:20:58.073736 kernel: ata3.00: configured for UDMA/100
Jan 13 21:20:58.074773 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 21:20:58.074803 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 21:20:58.120220 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 21:20:58.132352 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 21:20:58.132370 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 21:20:58.866834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:20:58.867556 disk-uuid[554]: The operation has completed successfully.
Jan 13 21:20:58.895065 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:20:58.895202 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:20:58.912896 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:20:58.916240 sh[589]: Success
Jan 13 21:20:58.928723 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 21:20:58.959683 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:20:58.968213 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:20:58.970657 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:20:58.981774 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:20:58.981803 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:20:58.981815 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:20:58.982787 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:20:58.984123 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:20:58.988164 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:20:58.989674 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:20:58.997822 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:20:58.999405 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:20:59.008172 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:20:59.008204 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:20:59.008218 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:20:59.010723 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:20:59.019762 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:20:59.021559 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:20:59.093594 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:20:59.105841 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:20:59.109299 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:20:59.111313 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:20:59.130106 systemd-networkd[767]: lo: Link UP
Jan 13 21:20:59.130117 systemd-networkd[767]: lo: Gained carrier
Jan 13 21:20:59.132006 systemd-networkd[767]: Enumeration completed
Jan 13 21:20:59.132382 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:20:59.132488 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:20:59.132492 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:20:59.134133 systemd[1]: Reached target network.target - Network.
Jan 13 21:20:59.134233 systemd-networkd[767]: eth0: Link UP
Jan 13 21:20:59.134237 systemd-networkd[767]: eth0: Gained carrier
Jan 13 21:20:59.134245 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:20:59.151781 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:20:59.169856 ignition[770]: Ignition 2.19.0
Jan 13 21:20:59.169868 ignition[770]: Stage: fetch-offline
Jan 13 21:20:59.169918 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:20:59.169930 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:20:59.170033 ignition[770]: parsed url from cmdline: ""
Jan 13 21:20:59.170037 ignition[770]: no config URL provided
Jan 13 21:20:59.170047 ignition[770]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:20:59.170058 ignition[770]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:20:59.170103 ignition[770]: op(1): [started] loading QEMU firmware config module
Jan 13 21:20:59.170110 ignition[770]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:20:59.178181 ignition[770]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:20:59.178203 ignition[770]: QEMU firmware config was not found. Ignoring...
Jan 13 21:20:59.220827 ignition[770]: parsing config with SHA512: 356ec0df8528e5678a818c37bc37297c96534f3d0dcad63d24f7288e151d50f1e3b526860555508458cc44c3cffbf6bee895e7298a6505c35955d9a53e16a47a
Jan 13 21:20:59.224101 unknown[770]: fetched base config from "system"
Jan 13 21:20:59.224116 unknown[770]: fetched user config from "qemu"
Jan 13 21:20:59.224555 ignition[770]: fetch-offline: fetch-offline passed
Jan 13 21:20:59.224635 ignition[770]: Ignition finished successfully
Jan 13 21:20:59.227017 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:20:59.228871 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:20:59.237875 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:20:59.249458 ignition[783]: Ignition 2.19.0
Jan 13 21:20:59.249468 ignition[783]: Stage: kargs
Jan 13 21:20:59.249615 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:20:59.249625 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:20:59.250552 ignition[783]: kargs: kargs passed
Jan 13 21:20:59.250600 ignition[783]: Ignition finished successfully
Jan 13 21:20:59.256877 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:20:59.264828 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:20:59.275929 ignition[792]: Ignition 2.19.0
Jan 13 21:20:59.275940 ignition[792]: Stage: disks
Jan 13 21:20:59.276117 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:20:59.276128 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:20:59.278986 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:20:59.277010 ignition[792]: disks: disks passed
Jan 13 21:20:59.280617 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:20:59.277053 ignition[792]: Ignition finished successfully
Jan 13 21:20:59.282497 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:20:59.283752 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:20:59.285285 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:20:59.286312 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:20:59.299856 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:20:59.310659 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:20:59.317256 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:20:59.333780 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:20:59.441723 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:20:59.441864 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:20:59.442876 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:20:59.458788 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:20:59.460374 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:20:59.461645 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:20:59.469651 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Jan 13 21:20:59.469669 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:20:59.469680 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:20:59.469691 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:20:59.461679 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:20:59.474736 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:20:59.461713 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:20:59.470812 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:20:59.476109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:20:59.478963 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:20:59.514494 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:20:59.518365 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:20:59.522323 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:20:59.525769 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:20:59.605752 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:20:59.616858 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:20:59.618400 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:20:59.625726 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:20:59.642152 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:20:59.649330 ignition[927]: INFO : Ignition 2.19.0
Jan 13 21:20:59.649330 ignition[927]: INFO : Stage: mount
Jan 13 21:20:59.650966 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:20:59.650966 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:20:59.650966 ignition[927]: INFO : mount: mount passed
Jan 13 21:20:59.650966 ignition[927]: INFO : Ignition finished successfully
Jan 13 21:20:59.652330 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:20:59.662778 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:20:59.981214 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:20:59.991855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:20:59.998723 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939)
Jan 13 21:21:00.000840 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:00.000862 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:00.000873 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:21:00.003722 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:21:00.005189 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:21:00.028105 ignition[956]: INFO : Ignition 2.19.0 Jan 13 21:21:00.028105 ignition[956]: INFO : Stage: files Jan 13 21:21:00.029777 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:00.029777 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:21:00.032368 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:21:00.033636 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:21:00.033636 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:21:00.037263 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:21:00.038633 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:21:00.040253 unknown[956]: wrote ssh authorized keys file for user: core Jan 13 21:21:00.041304 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:21:00.043372 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:21:00.045216 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:21:00.206510 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:21:00.334856 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:21:00.334856 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:21:00.338812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 21:21:00.693858 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:21:01.128513 systemd-networkd[767]: eth0: Gained IPv6LL Jan 13 21:21:01.144505 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:21:01.144505 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:21:01.144505 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:21:01.150132 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:21:01.150132 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:21:01.150132 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 13 21:21:01.150132 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:21:01.150132 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:21:01.150132 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 13 21:21:01.150132 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:21:01.169313 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:21:01.174028 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:21:01.175581 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:21:01.175581 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:21:01.175581 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:21:01.175581 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:21:01.175581 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:21:01.175581 ignition[956]: INFO : files: files passed Jan 13 21:21:01.175581 ignition[956]: INFO : Ignition finished successfully Jan 13 21:21:01.177141 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:21:01.190832 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:21:01.193463 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 13 21:21:01.195316 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:21:01.195422 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:21:01.203235 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:21:01.206095 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:21:01.206095 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:21:01.209142 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:21:01.212299 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:21:01.213745 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:21:01.226834 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:21:01.249549 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:21:01.249675 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:21:01.251915 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:21:01.253953 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:21:01.255947 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:21:01.265816 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:21:01.280584 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:21:01.289860 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:21:01.299374 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:21:01.300021 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:21:01.300379 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:21:01.334820 ignition[1011]: INFO : Ignition 2.19.0 Jan 13 21:21:01.334820 ignition[1011]: INFO : Stage: umount Jan 13 21:21:01.334820 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:01.334820 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:21:01.334820 ignition[1011]: INFO : umount: umount passed Jan 13 21:21:01.334820 ignition[1011]: INFO : Ignition finished successfully Jan 13 21:21:01.300693 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:21:01.300810 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:21:01.301364 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:21:01.301711 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:21:01.302037 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:21:01.302366 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:21:01.302695 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:21:01.303036 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:21:01.303363 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
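initrd-cleanup.service is what sets off the wave of "Stopped target ..." messages that follows: upstream, its ExecStart is a single systemctl call that isolates the switch-root target, conflicting out every unit not needed for the pivot into the real root:

    systemctl --no-block isolate initrd-switch-root.target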
Jan 13 21:21:01.303713 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:21:01.304034 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:21:01.304358 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:21:01.304501 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:21:01.304607 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:21:01.305188 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:21:01.305523 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:21:01.305982 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:21:01.306092 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:21:01.306328 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:21:01.306432 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:21:01.307155 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:21:01.307263 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:21:01.307747 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:21:01.307975 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:21:01.311761 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:21:01.312188 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:21:01.312490 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:21:01.312833 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:21:01.312924 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:21:01.313344 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:21:01.313432 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:21:01.314013 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:21:01.314130 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:21:01.314511 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:21:01.314612 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:21:01.315635 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:21:01.316490 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:21:01.316914 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:21:01.317012 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:21:01.317323 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:21:01.317414 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:21:01.321080 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:21:01.321178 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:21:01.336325 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:21:01.336453 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:21:01.337566 systemd[1]: Stopped target network.target - Network. Jan 13 21:21:01.339329 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 13 21:21:01.339382 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:21:01.341434 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:21:01.341481 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:21:01.343339 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:21:01.343383 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:21:01.345168 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:21:01.345214 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:21:01.347464 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:21:01.349748 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:21:01.352781 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:21:01.353733 systemd-networkd[767]: eth0: DHCPv6 lease lost Jan 13 21:21:01.355866 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:21:01.355987 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:21:01.357467 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:21:01.357507 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:21:01.365773 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:21:01.367128 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:21:01.367178 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:21:01.369667 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:21:01.371998 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:21:01.372118 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:21:01.385082 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:21:01.385169 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:21:01.386979 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:21:01.387037 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:21:01.388935 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:21:01.388983 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:21:01.391394 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:21:01.391513 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:21:01.401480 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:21:01.401660 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:21:01.403815 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:21:01.403865 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:21:01.405489 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:21:01.405528 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:21:01.407666 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:21:01.407728 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:21:01.409868 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 13 21:21:01.409916 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:21:01.411782 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:21:01.411829 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:21:01.421836 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:21:01.423229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:21:01.423284 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:21:01.425526 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:21:01.425573 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:21:01.427639 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:21:01.427687 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:21:01.429682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:21:01.429739 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:21:01.432018 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:21:01.432131 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:21:01.546781 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:21:01.546934 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:21:01.548056 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:21:01.551369 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:21:01.551456 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:21:01.561828 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:21:01.568764 systemd[1]: Switching root. Jan 13 21:21:01.603067 systemd-journald[192]: Journal stopped Jan 13 21:21:03.093296 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 13 21:21:03.093371 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:21:03.093385 kernel: SELinux: policy capability open_perms=1 Jan 13 21:21:03.093397 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:21:03.093411 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:21:03.093423 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:21:03.093435 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:21:03.093446 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:21:03.093464 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:21:03.093475 kernel: audit: type=1403 audit(1736803262.286:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:21:03.093488 systemd[1]: Successfully loaded SELinux policy in 37.705ms. Jan 13 21:21:03.093513 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.227ms. Jan 13 21:21:03.093526 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:21:03.093541 systemd[1]: Detected virtualization kvm. 
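At "Switching root" the initrd's PID 1 serializes its state, journald is stopped with SIGTERM, and systemd re-executes from the real root, the equivalent of:

    systemctl switch-root /sysroot

The first acts of the real root's systemd are already visible above: loading the SELinux policy (37.705 ms) and relabeling /dev, /run and the cgroup tree (12.227 ms).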
Jan 13 21:21:03.093553 systemd[1]: Detected architecture x86-64. Jan 13 21:21:03.093564 systemd[1]: Detected first boot. Jan 13 21:21:03.093576 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:21:03.093588 zram_generator::config[1055]: No configuration found. Jan 13 21:21:03.093606 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:21:03.093619 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:21:03.093636 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:21:03.093651 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:21:03.093664 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:21:03.093675 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:21:03.093688 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:21:03.093711 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:21:03.093725 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:21:03.093737 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:21:03.093749 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:21:03.093761 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:21:03.093775 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:21:03.093787 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:21:03.093800 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:21:03.093811 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:21:03.093824 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:21:03.093836 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:21:03.093848 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:21:03.093859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:21:03.093871 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:21:03.093885 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:21:03.093898 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:21:03.093910 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:21:03.093921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:21:03.093938 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:21:03.093950 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:21:03.093962 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:21:03.093976 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:21:03.093995 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:21:03.094007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:21:03.094018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
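"Initializing machine ID from VM UUID" means systemd seeded the machine ID from the hypervisor-provided DMI UUID rather than generating a random one, keeping the ID stable if the same VM is re-provisioned. The pieces involved, roughly:

    cat /sys/class/dmi/id/product_uuid   # the UUID QEMU/KVM exposes to the guest
    systemd-machine-id-setup             # writes /etc/machine-id when it is missing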
Jan 13 21:21:03.094030 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:21:03.094042 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:21:03.094054 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:21:03.094066 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:21:03.094078 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:21:03.094093 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:03.094105 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:21:03.094117 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:21:03.094129 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:21:03.094142 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:21:03.094154 systemd[1]: Reached target machines.target - Containers. Jan 13 21:21:03.094165 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:21:03.094177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:21:03.094189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:21:03.094208 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:21:03.094220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:21:03.094232 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:21:03.094244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:21:03.094256 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:21:03.094268 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:21:03.094281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:21:03.094293 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:21:03.094308 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:21:03.094319 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:21:03.094331 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:21:03.094343 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:21:03.094355 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:21:03.094367 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:21:03.094379 kernel: fuse: init (API version 7.39) Jan 13 21:21:03.094390 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:21:03.094419 systemd-journald[1118]: Collecting audit messages is disabled. Jan 13 21:21:03.094442 systemd-journald[1118]: Journal started Jan 13 21:21:03.094464 systemd-journald[1118]: Runtime Journal (/run/log/journal/042e0d2c24624761a15af64f1a704cb4) is 6.0M, max 48.4M, 42.3M free. Jan 13 21:21:02.863412 systemd[1]: Queued start job for default target multi-user.target. 
Jan 13 21:21:02.881355 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:21:02.881799 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:21:03.097224 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:21:03.098741 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:21:03.098770 systemd[1]: Stopped verity-setup.service. Jan 13 21:21:03.099882 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:03.101719 kernel: loop: module loaded Jan 13 21:21:03.105168 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:21:03.106007 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:21:03.107175 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:21:03.108396 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:21:03.110925 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:21:03.112132 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:21:03.113474 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:21:03.114825 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:21:03.118515 kernel: ACPI: bus type drm_connector registered Jan 13 21:21:03.116660 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:21:03.117000 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:21:03.119272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:21:03.119444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:21:03.120976 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:21:03.121162 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:21:03.122610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:21:03.122822 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:21:03.124392 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:21:03.124575 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:21:03.126150 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:21:03.126385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:21:03.127996 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:21:03.129763 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:21:03.131746 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:21:03.149845 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:21:03.158829 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:21:03.161511 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:21:03.162821 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:21:03.162863 systemd[1]: Reached target local-fs.target - Local File Systems. 
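The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop pairs above all come from one templated unit, instantiated per module. Abridged from the upstream systemd unit:

    # modprobe@.service (abridged)
    [Unit]
    Description=Load Kernel Module %i

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

The leading "-" on ExecStart makes a missing module non-fatal, which is why each instance "finishes" whether or not anything was actually loaded.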
Jan 13 21:21:03.165352 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:21:03.168051 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:21:03.170633 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:21:03.172296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:21:03.173902 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:21:03.179616 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:21:03.181279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:21:03.185447 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:21:03.189039 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:21:03.190251 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:21:03.196652 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:21:03.199344 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:21:03.203766 systemd-journald[1118]: Time spent on flushing to /var/log/journal/042e0d2c24624761a15af64f1a704cb4 is 14.350ms for 952 entries. Jan 13 21:21:03.203766 systemd-journald[1118]: System Journal (/var/log/journal/042e0d2c24624761a15af64f1a704cb4) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:21:03.234007 systemd-journald[1118]: Received client request to flush runtime journal. Jan 13 21:21:03.234055 kernel: loop0: detected capacity change from 0 to 140768 Jan 13 21:21:03.203918 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:21:03.207092 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:21:03.209710 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:21:03.211434 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:21:03.219749 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:21:03.221685 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:21:03.231393 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:21:03.236546 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:21:03.243892 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:21:03.249034 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:21:03.250838 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:21:03.253237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:21:03.261727 kernel: loop1: detected capacity change from 0 to 142488 Jan 13 21:21:03.262498 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 13 21:21:03.262517 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. 
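The journal size reports above (runtime 6.0M of max 48.4M, system 8.0M of max 195.6M) reflect journald's defaults, which are computed as a fraction of the backing filesystem's size; the flush moves buffered entries from /run to /var once the root is writable. The caps can be pinned if the computed values are unsuitable (sketch):

    # /etc/systemd/journald.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=195M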
Jan 13 21:21:03.271098 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:21:03.280924 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:21:03.283529 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:21:03.284503 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:21:03.290810 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 21:21:03.312170 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:21:03.320728 kernel: loop2: detected capacity change from 0 to 210664 Jan 13 21:21:03.320476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:21:03.343590 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 13 21:21:03.343611 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 13 21:21:03.350023 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:21:03.364762 kernel: loop3: detected capacity change from 0 to 140768 Jan 13 21:21:03.375756 kernel: loop4: detected capacity change from 0 to 142488 Jan 13 21:21:03.384724 kernel: loop5: detected capacity change from 0 to 210664 Jan 13 21:21:03.388746 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:21:03.389331 (sd-merge)[1197]: Merged extensions into '/usr'. Jan 13 21:21:03.393085 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:21:03.393101 systemd[1]: Reloading... Jan 13 21:21:03.448727 zram_generator::config[1223]: No configuration found. Jan 13 21:21:03.516167 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:21:03.564301 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:03.612884 systemd[1]: Reloading finished in 219 ms. Jan 13 21:21:03.646532 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:21:03.648410 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:21:03.663910 systemd[1]: Starting ensure-sysext.service... Jan 13 21:21:03.666283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:21:03.675235 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:21:03.675249 systemd[1]: Reloading... Jan 13 21:21:03.690257 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:21:03.690607 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:21:03.692075 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:21:03.692430 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 21:21:03.692530 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 21:21:03.698395 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. 
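The (sd-merge) lines are systemd-sysext merging the three extension images, 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' (the last being the .raw file Ignition linked into /etc/extensions), over /usr with an overlay mount, followed by the daemon reload so the new units are seen. The merge can be inspected or redone by hand:

    systemd-sysext status    # list extensions and whether /usr is merged
    systemd-sysext refresh   # unmerge and re-merge after changing images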
Jan 13 21:21:03.698470 systemd-tmpfiles[1261]: Skipping /boot Jan 13 21:21:03.709328 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:21:03.709461 systemd-tmpfiles[1261]: Skipping /boot Jan 13 21:21:03.746991 zram_generator::config[1297]: No configuration found. Jan 13 21:21:03.888289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:03.946167 systemd[1]: Reloading finished in 270 ms. Jan 13 21:21:03.965933 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:21:03.989169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:21:03.996247 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:21:03.998966 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:21:04.001207 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:21:04.005989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:21:04.010901 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:21:04.014743 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:21:04.020982 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:21:04.023722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:04.023883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:21:04.025906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:21:04.028932 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:21:04.032242 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:21:04.033433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:21:04.033523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:04.036172 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:21:04.041318 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:04.041613 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:21:04.043398 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:21:04.049512 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:21:04.050772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:04.051130 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Jan 13 21:21:04.052694 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
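The docker.socket warning repeated across both reloads above is self-describing: line 6 of the unit points at the legacy /var/run tree and systemd rewrites it on the fly. The offending stanza versus the form systemd wants (sketch):

    [Socket]
    ListenStream=/var/run/docker.sock   # as shipped: triggers the warning
    ListenStream=/run/docker.sock       # what systemd rewrites it to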
Jan 13 21:21:04.055435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:21:04.055770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:21:04.057427 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:21:04.057608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:21:04.062156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:21:04.063199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:21:04.071635 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:21:04.073366 augenrules[1358]: No rules Jan 13 21:21:04.073546 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:21:04.084892 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:21:04.087302 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:21:04.091851 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:04.092687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:21:04.099973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:21:04.104592 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:21:04.106928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:21:04.111857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:21:04.113120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:21:04.120878 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:21:04.121935 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:04.122578 systemd[1]: Finished ensure-sysext.service. Jan 13 21:21:04.124221 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:21:04.126065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:21:04.126284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:21:04.127892 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:21:04.128074 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:21:04.129588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:21:04.129776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:21:04.131349 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:21:04.131570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:21:04.137724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381) Jan 13 21:21:04.148240 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:21:04.155442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
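"augenrules: No rules" means /etc/audit/rules.d/ contributed nothing when audit-rules.service assembled the kernel audit ruleset, so auditing stays at its defaults. A drop-in there (hypothetical) would be compiled and loaded at exactly this point:

    # /etc/audit/rules.d/10-watch-update-conf.rules
    -w /etc/flatcar/update.conf -p wa -k update-config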
Jan 13 21:21:04.155518 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:21:04.159246 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:21:04.160474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:21:04.163006 systemd-resolved[1331]: Positive Trust Anchors: Jan 13 21:21:04.163022 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:21:04.163054 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:21:04.170264 systemd-resolved[1331]: Defaulting to hostname 'linux'. Jan 13 21:21:04.172030 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:21:04.173870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:21:04.210098 systemd-networkd[1390]: lo: Link UP Jan 13 21:21:04.210413 systemd-networkd[1390]: lo: Gained carrier Jan 13 21:21:04.212269 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:21:04.212812 systemd-networkd[1390]: Enumeration completed Jan 13 21:21:04.213487 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:21:04.213541 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:21:04.213653 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:21:04.215125 systemd[1]: Reached target network.target - Network. Jan 13 21:21:04.215523 systemd-networkd[1390]: eth0: Link UP Jan 13 21:21:04.215568 systemd-networkd[1390]: eth0: Gained carrier Jan 13 21:21:04.215622 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:21:04.223922 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:21:04.227561 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:21:04.227777 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:21:04.232054 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:21:04.232406 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:21:05.017734 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:21:05.017779 systemd-timesyncd[1401]: Initial clock synchronization to Mon 2025-01-13 21:21:05.017651 UTC. Jan 13 21:21:05.017819 systemd-resolved[1331]: Clock change detected. Flushing caches. 
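eth0 is matched by Flatcar's catch-all network unit named in the log above; its substance is just a wildcard match plus DHCP (abridged sketch of /usr/lib/systemd/network/zz-default.network):

    [Match]
    Name=*

    [Network]
    DHCP=yes

That is consistent with the DHCPv4 lease (10.0.0.66/16 via 10.0.0.1) and with timesyncd then reaching the NTP server at the same address; the "Clock change detected" cache flush in resolved is triggered by that initial clock step.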
Jan 13 21:21:05.019708 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 21:21:05.023967 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:21:05.028387 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:21:05.035896 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 21:21:05.037892 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 21:21:05.043731 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 21:21:05.045983 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 21:21:05.081038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:21:05.140926 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:21:05.154163 kernel: kvm_amd: TSC scaling supported Jan 13 21:21:05.154208 kernel: kvm_amd: Nested Virtualization enabled Jan 13 21:21:05.154221 kernel: kvm_amd: Nested Paging enabled Jan 13 21:21:05.154233 kernel: kvm_amd: LBR virtualization supported Jan 13 21:21:05.155535 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 21:21:05.155555 kernel: kvm_amd: Virtual GIF supported Jan 13 21:21:05.179899 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:21:05.211240 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:21:05.217310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:21:05.230127 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:21:05.237569 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:21:05.268797 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:21:05.270307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:21:05.271435 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:21:05.272606 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:21:05.273908 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:21:05.275407 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:21:05.276656 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:21:05.278044 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:21:05.279299 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:21:05.279325 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:21:05.280247 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:21:05.281805 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:21:05.284536 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:21:05.293370 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:21:05.295916 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:21:05.297511 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:21:05.298680 systemd[1]: Reached target sockets.target - Socket Units. 
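The lvmetad warning above is benign on this image: no lvmetad daemon is running, so the LVM tools fall back to scanning block devices directly, find no volume groups, and lvm2-activation completes anyway. On lvm2 versions that still support the daemon, the fallback can be made the explicit policy (sketch; the option was removed in newer lvm2 releases):

    # /etc/lvm/lvm.conf
    global {
        use_lvmetad = 0
    }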
Jan 13 21:21:05.299707 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:21:05.300674 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:21:05.300703 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:21:05.301630 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:21:05.303650 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:21:05.306946 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:21:05.309586 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:21:05.311508 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:21:05.314011 jq[1434]: false Jan 13 21:21:05.314007 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:21:05.317023 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:21:05.317031 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:21:05.319113 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:21:05.324980 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:21:05.329135 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:21:05.330607 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:21:05.331093 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:21:05.334031 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:21:05.336618 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:21:05.337494 dbus-daemon[1433]: [system] SELinux support is enabled Jan 13 21:21:05.338575 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:21:05.340235 extend-filesystems[1435]: Found loop3 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found loop4 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found loop5 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found sr0 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found vda Jan 13 21:21:05.340804 extend-filesystems[1435]: Found vda1 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found vda2 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found vda3 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found usr Jan 13 21:21:05.340804 extend-filesystems[1435]: Found vda4 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found vda6 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found vda7 Jan 13 21:21:05.340804 extend-filesystems[1435]: Found vda9 Jan 13 21:21:05.340804 extend-filesystems[1435]: Checking size of /dev/vda9 Jan 13 21:21:05.346092 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:21:05.360332 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 13 21:21:05.361346 update_engine[1447]: I20250113 21:21:05.358318 1447 main.cc:92] Flatcar Update Engine starting Jan 13 21:21:05.361346 update_engine[1447]: I20250113 21:21:05.359519 1447 update_check_scheduler.cc:74] Next update check in 2m21s Jan 13 21:21:05.360550 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:21:05.361655 jq[1448]: true Jan 13 21:21:05.360906 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:21:05.361108 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:21:05.363684 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:21:05.363922 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:21:05.366440 extend-filesystems[1435]: Resized partition /dev/vda9 Jan 13 21:21:05.373979 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381) Jan 13 21:21:05.379506 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:21:05.382711 jq[1457]: true Jan 13 21:21:05.391613 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:21:05.393268 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:21:05.399559 tar[1456]: linux-amd64/helm Jan 13 21:21:05.411473 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:21:05.411501 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:21:05.412679 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:21:05.413638 systemd-logind[1443]: New seat seat0. Jan 13 21:21:05.416579 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:21:05.418028 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:21:05.418055 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:21:05.420187 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:21:05.420207 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:21:05.431881 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:21:05.434195 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:21:05.435525 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:21:05.458109 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:21:05.458109 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:21:05.458109 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:21:05.464877 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Jan 13 21:21:05.465946 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:21:05.466175 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
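extend-filesystems grows the root filesystem to fill its partition: resize2fs takes /dev/vda9 from 553472 to 1864699 4-KiB blocks, i.e. from roughly 2.1 GiB to 7.1 GiB, on-line while the filesystem is mounted at /. The final step is conceptually just:

    resize2fs /dev/vda9   # ext4 supports on-line growth while mounted

Update Engine also starts here; the reboot side of updates is coordinated by locksmithd (below), whose strategy comes from the /etc/flatcar/update.conf file Ignition wrote earlier (e.g. REBOOT_STRATEGY=etcd-lock).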
Jan 13 21:21:05.466687 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:21:05.471021 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:21:05.472604 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:21:05.475160 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:21:05.531012 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:21:05.553343 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:21:05.561152 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:21:05.563299 systemd[1]: Started sshd@0-10.0.0.66:22-10.0.0.1:49060.service - OpenSSH per-connection server daemon (10.0.0.1:49060). Jan 13 21:21:05.568811 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:21:05.569066 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:21:05.574563 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:21:05.589135 containerd[1459]: time="2025-01-13T21:21:05.589056640Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:21:05.596137 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:21:05.605168 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:21:05.607359 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:21:05.608609 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:21:05.618047 containerd[1459]: time="2025-01-13T21:21:05.617933455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:05.619830 containerd[1459]: time="2025-01-13T21:21:05.619527745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:05.619830 containerd[1459]: time="2025-01-13T21:21:05.619568170Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:21:05.619830 containerd[1459]: time="2025-01-13T21:21:05.619587747Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:21:05.619830 containerd[1459]: time="2025-01-13T21:21:05.619786610Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:21:05.619830 containerd[1459]: time="2025-01-13T21:21:05.619802400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:05.619988 containerd[1459]: time="2025-01-13T21:21:05.619895003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:05.619988 containerd[1459]: time="2025-01-13T21:21:05.619910893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:05.620150 containerd[1459]: time="2025-01-13T21:21:05.620121578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:05.620150 containerd[1459]: time="2025-01-13T21:21:05.620141075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:05.620196 containerd[1459]: time="2025-01-13T21:21:05.620154751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:05.620196 containerd[1459]: time="2025-01-13T21:21:05.620165551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:05.620274 containerd[1459]: time="2025-01-13T21:21:05.620257323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:05.620518 containerd[1459]: time="2025-01-13T21:21:05.620492454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:21:05.620645 containerd[1459]: time="2025-01-13T21:21:05.620620995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:21:05.620645 containerd[1459]: time="2025-01-13T21:21:05.620637416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:21:05.620757 containerd[1459]: time="2025-01-13T21:21:05.620734488Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:21:05.620807 containerd[1459]: time="2025-01-13T21:21:05.620792016Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:21:05.627967 containerd[1459]: time="2025-01-13T21:21:05.627936234Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:21:05.628006 containerd[1459]: time="2025-01-13T21:21:05.627982240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:21:05.628006 containerd[1459]: time="2025-01-13T21:21:05.627997268Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:21:05.628042 containerd[1459]: time="2025-01-13T21:21:05.628010813Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:21:05.628042 containerd[1459]: time="2025-01-13T21:21:05.628033596Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:21:05.628326 containerd[1459]: time="2025-01-13T21:21:05.628175092Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:21:05.628435 containerd[1459]: time="2025-01-13T21:21:05.628409802Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:21:05.628551 containerd[1459]: time="2025-01-13T21:21:05.628526561Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 13 21:21:05.628551 containerd[1459]: time="2025-01-13T21:21:05.628545807Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:21:05.628593 containerd[1459]: time="2025-01-13T21:21:05.628559072Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:21:05.628593 containerd[1459]: time="2025-01-13T21:21:05.628573439Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:21:05.628593 containerd[1459]: time="2025-01-13T21:21:05.628586573Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:21:05.628651 containerd[1459]: time="2025-01-13T21:21:05.628598486Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:21:05.628651 containerd[1459]: time="2025-01-13T21:21:05.628611290Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:21:05.628651 containerd[1459]: time="2025-01-13T21:21:05.628625095Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:21:05.628651 containerd[1459]: time="2025-01-13T21:21:05.628637158Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:21:05.628651 containerd[1459]: time="2025-01-13T21:21:05.628649201Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:21:05.628738 containerd[1459]: time="2025-01-13T21:21:05.628661113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:21:05.628738 containerd[1459]: time="2025-01-13T21:21:05.628679688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628738 containerd[1459]: time="2025-01-13T21:21:05.628693734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628738 containerd[1459]: time="2025-01-13T21:21:05.628705727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628738 containerd[1459]: time="2025-01-13T21:21:05.628717569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628738 containerd[1459]: time="2025-01-13T21:21:05.628728980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628738 containerd[1459]: time="2025-01-13T21:21:05.628741904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628881 containerd[1459]: time="2025-01-13T21:21:05.628754378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628881 containerd[1459]: time="2025-01-13T21:21:05.628776149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628881 containerd[1459]: time="2025-01-13T21:21:05.628794533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 13 21:21:05.628881 containerd[1459]: time="2025-01-13T21:21:05.628813459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628881 containerd[1459]: time="2025-01-13T21:21:05.628825712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628881 containerd[1459]: time="2025-01-13T21:21:05.628837193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.628881 containerd[1459]: time="2025-01-13T21:21:05.628849847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.629006 containerd[1459]: time="2025-01-13T21:21:05.628886035Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:21:05.629006 containerd[1459]: time="2025-01-13T21:21:05.628905141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.629006 containerd[1459]: time="2025-01-13T21:21:05.628922283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.629006 containerd[1459]: time="2025-01-13T21:21:05.628933484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:21:05.629006 containerd[1459]: time="2025-01-13T21:21:05.628981714Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:21:05.629006 containerd[1459]: time="2025-01-13T21:21:05.628999978Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:21:05.629006 containerd[1459]: time="2025-01-13T21:21:05.629010458Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:21:05.629135 containerd[1459]: time="2025-01-13T21:21:05.629021789Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:21:05.629135 containerd[1459]: time="2025-01-13T21:21:05.629031528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:21:05.629135 containerd[1459]: time="2025-01-13T21:21:05.629043179Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:21:05.629135 containerd[1459]: time="2025-01-13T21:21:05.629052757Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:21:05.629135 containerd[1459]: time="2025-01-13T21:21:05.629062355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:21:05.629364 containerd[1459]: time="2025-01-13T21:21:05.629306123Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:21:05.629364 containerd[1459]: time="2025-01-13T21:21:05.629358421Z" level=info msg="Connect containerd service" Jan 13 21:21:05.629503 containerd[1459]: time="2025-01-13T21:21:05.629392174Z" level=info msg="using legacy CRI server" Jan 13 21:21:05.629503 containerd[1459]: time="2025-01-13T21:21:05.629399207Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:21:05.629503 containerd[1459]: time="2025-01-13T21:21:05.629473967Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:21:05.630069 containerd[1459]: time="2025-01-13T21:21:05.630038726Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:21:05.630523 
containerd[1459]: time="2025-01-13T21:21:05.630189850Z" level=info msg="Start subscribing containerd event" Jan 13 21:21:05.630523 containerd[1459]: time="2025-01-13T21:21:05.630252067Z" level=info msg="Start recovering state" Jan 13 21:21:05.630523 containerd[1459]: time="2025-01-13T21:21:05.630320976Z" level=info msg="Start event monitor" Jan 13 21:21:05.630523 containerd[1459]: time="2025-01-13T21:21:05.630332467Z" level=info msg="Start snapshots syncer" Jan 13 21:21:05.630523 containerd[1459]: time="2025-01-13T21:21:05.630343909Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:21:05.630523 containerd[1459]: time="2025-01-13T21:21:05.630352725Z" level=info msg="Start streaming server" Jan 13 21:21:05.630523 containerd[1459]: time="2025-01-13T21:21:05.630434509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:21:05.630523 containerd[1459]: time="2025-01-13T21:21:05.630497797Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:21:05.631081 containerd[1459]: time="2025-01-13T21:21:05.630578589Z" level=info msg="containerd successfully booted in 0.042476s" Jan 13 21:21:05.630660 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:21:05.635705 sshd[1510]: Accepted publickey for core from 10.0.0.1 port 49060 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:21:05.636947 sshd[1510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:05.644645 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:21:05.666061 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:21:05.669200 systemd-logind[1443]: New session 1 of user core. Jan 13 21:21:05.681741 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:21:05.693115 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:21:05.697185 (systemd)[1525]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:21:05.804070 tar[1456]: linux-amd64/LICENSE Jan 13 21:21:05.804151 tar[1456]: linux-amd64/README.md Jan 13 21:21:05.809677 systemd[1525]: Queued start job for default target default.target. Jan 13 21:21:05.814248 systemd[1525]: Created slice app.slice - User Application Slice. Jan 13 21:21:05.814274 systemd[1525]: Reached target paths.target - Paths. Jan 13 21:21:05.814289 systemd[1525]: Reached target timers.target - Timers. Jan 13 21:21:05.815751 systemd[1525]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:21:05.818988 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:21:05.826983 systemd[1525]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:21:05.827155 systemd[1525]: Reached target sockets.target - Sockets. Jan 13 21:21:05.827176 systemd[1525]: Reached target basic.target - Basic System. Jan 13 21:21:05.827225 systemd[1525]: Reached target default.target - Main User Target. Jan 13 21:21:05.827271 systemd[1525]: Startup finished in 122ms. Jan 13 21:21:05.827537 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:21:05.829982 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:21:05.889830 systemd[1]: Started sshd@1-10.0.0.66:22-10.0.0.1:49064.service - OpenSSH per-connection server daemon (10.0.0.1:49064). 
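
The "Start cri plugin with config {...}" dump above is containerd's effective CRI configuration. A hedged reconstruction of the config.toml fragments that would produce the key settings visible there (SystemdCgroup=true for the runc runtime, pause:3.8 as the sandbox image, CNI under /opt/cni/bin and /etc/cni/net.d); the actual file layout on this host is not shown in the log:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir  = "/opt/cni/bin"       # where CNI plugin binaries are expected
        conf_dir = "/etc/cni/net.d"     # where network configs are expected
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true          # delegate container cgroups to systemd

The "failed to load cni during init" error is expected at this stage: /etc/cni/net.d is still empty until a network plugin is installed, and the "cni network conf syncer" started above will pick up the config once it appears.
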
Jan 13 21:21:05.922036 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 49064 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:21:05.923465 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:05.927444 systemd-logind[1443]: New session 2 of user core. Jan 13 21:21:05.947996 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:21:06.005073 sshd[1539]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:06.021713 systemd[1]: sshd@1-10.0.0.66:22-10.0.0.1:49064.service: Deactivated successfully. Jan 13 21:21:06.023292 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:21:06.024764 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:21:06.025990 systemd[1]: Started sshd@2-10.0.0.66:22-10.0.0.1:49080.service - OpenSSH per-connection server daemon (10.0.0.1:49080). Jan 13 21:21:06.028121 systemd-logind[1443]: Removed session 2. Jan 13 21:21:06.059325 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 49080 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:21:06.060960 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:06.064817 systemd-logind[1443]: New session 3 of user core. Jan 13 21:21:06.074012 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:21:06.130893 sshd[1546]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:06.135596 systemd[1]: sshd@2-10.0.0.66:22-10.0.0.1:49080.service: Deactivated successfully. Jan 13 21:21:06.137560 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:21:06.138274 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:21:06.139365 systemd-logind[1443]: Removed session 3. Jan 13 21:21:06.455112 systemd-networkd[1390]: eth0: Gained IPv6LL Jan 13 21:21:06.458352 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:21:06.460165 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:21:06.475078 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:21:06.477773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:06.480209 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:21:06.501647 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:21:06.503303 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:21:06.503515 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:21:06.506753 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:21:07.095269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:07.097186 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:21:07.098919 systemd[1]: Startup finished in 724ms (kernel) + 5.574s (initrd) + 4.064s (userspace) = 10.363s. 
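
Each accepted connection above logs the SHA256 fingerprint of the client key (SHA256:zXff...). To see which entry in the authorized_keys file a logged fingerprint corresponds to, one option, assuming a reasonably recent OpenSSH, is:

    # print one fingerprint per key in the file and compare against the log
    ssh-keygen -lf /home/core/.ssh/authorized_keys
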
Jan 13 21:21:07.118250 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:21:07.559355 kubelet[1574]: E0113 21:21:07.559186 1574 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:21:07.563557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:21:07.563780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:21:16.141830 systemd[1]: Started sshd@3-10.0.0.66:22-10.0.0.1:38450.service - OpenSSH per-connection server daemon (10.0.0.1:38450). Jan 13 21:21:16.171827 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 38450 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:21:16.173287 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:16.177190 systemd-logind[1443]: New session 4 of user core. Jan 13 21:21:16.186989 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:21:16.241293 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:16.252796 systemd[1]: sshd@3-10.0.0.66:22-10.0.0.1:38450.service: Deactivated successfully. Jan 13 21:21:16.254673 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:21:16.256321 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:21:16.257601 systemd[1]: Started sshd@4-10.0.0.66:22-10.0.0.1:38458.service - OpenSSH per-connection server daemon (10.0.0.1:38458). Jan 13 21:21:16.258680 systemd-logind[1443]: Removed session 4. Jan 13 21:21:16.301455 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 38458 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:21:16.303233 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:16.307281 systemd-logind[1443]: New session 5 of user core. Jan 13 21:21:16.316971 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:21:16.368041 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:16.378420 systemd[1]: sshd@4-10.0.0.66:22-10.0.0.1:38458.service: Deactivated successfully. Jan 13 21:21:16.379799 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:21:16.381178 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:21:16.390218 systemd[1]: Started sshd@5-10.0.0.66:22-10.0.0.1:38462.service - OpenSSH per-connection server daemon (10.0.0.1:38462). Jan 13 21:21:16.391162 systemd-logind[1443]: Removed session 5. Jan 13 21:21:16.415902 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 38462 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:21:16.417413 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:16.421086 systemd-logind[1443]: New session 6 of user core. Jan 13 21:21:16.441974 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:21:16.496016 sshd[1602]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:16.508445 systemd[1]: sshd@5-10.0.0.66:22-10.0.0.1:38462.service: Deactivated successfully. 
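
The kubelet exit above is the normal pre-bootstrap state on a kubeadm-style node: /var/lib/kubelet/config.yaml is only written by kubeadm init/join, so until that runs the unit fails and is restarted (restart counters 1 and 2 appear further down). For orientation only, a minimal sketch of the kind of file the kubelet is looking for; the real kubeadm-generated file carries much more (authentication, TLS, eviction settings):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd             # matches SystemdCgroup=true in the containerd config
    staticPodPath: /etc/kubernetes/manifests
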
Jan 13 21:21:16.510474 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:21:16.512171 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:21:16.520074 systemd[1]: Started sshd@6-10.0.0.66:22-10.0.0.1:38464.service - OpenSSH per-connection server daemon (10.0.0.1:38464). Jan 13 21:21:16.520834 systemd-logind[1443]: Removed session 6. Jan 13 21:21:16.545430 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 38464 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:21:16.547015 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:16.550740 systemd-logind[1443]: New session 7 of user core. Jan 13 21:21:16.565997 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:21:16.626291 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:21:16.626722 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:16.650494 sudo[1612]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:16.652800 sshd[1609]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:16.670976 systemd[1]: sshd@6-10.0.0.66:22-10.0.0.1:38464.service: Deactivated successfully. Jan 13 21:21:16.672929 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:21:16.674416 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:21:16.675839 systemd[1]: Started sshd@7-10.0.0.66:22-10.0.0.1:38466.service - OpenSSH per-connection server daemon (10.0.0.1:38466). Jan 13 21:21:16.676548 systemd-logind[1443]: Removed session 7. Jan 13 21:21:16.707248 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 38466 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:21:16.708825 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:16.712970 systemd-logind[1443]: New session 8 of user core. Jan 13 21:21:16.733170 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:21:16.786897 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:21:16.787213 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:16.790532 sudo[1621]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:16.796641 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:21:16.797008 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:16.820288 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:21:16.821716 auditctl[1624]: No rules Jan 13 21:21:16.822157 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:21:16.822404 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:21:16.825338 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:21:16.856042 augenrules[1642]: No rules Jan 13 21:21:16.858153 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:21:16.859555 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:16.861717 sshd[1617]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:16.872635 systemd[1]: sshd@7-10.0.0.66:22-10.0.0.1:38466.service: Deactivated successfully. 
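
The sudo sequence above removes two rule files from /etc/audit/rules.d/ and restarts audit-rules.service; augenrules then recompiles what is left, and both auditctl and augenrules accordingly report "No rules". As a hedged, hypothetical example of the file format involved (no such file exists on this host):

    # /etc/audit/rules.d/90-example.rules (hypothetical)
    -D                                          # flush existing rules
    -b 8192                                     # kernel audit buffer size
    -w /etc/kubernetes/ -p wa -k kube-config    # watch writes/attribute changes
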
Jan 13 21:21:16.874605 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:21:16.876316 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:21:16.893289 systemd[1]: Started sshd@8-10.0.0.66:22-10.0.0.1:38470.service - OpenSSH per-connection server daemon (10.0.0.1:38470). Jan 13 21:21:16.894259 systemd-logind[1443]: Removed session 8. Jan 13 21:21:16.919544 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 38470 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:21:16.921174 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:21:16.925300 systemd-logind[1443]: New session 9 of user core. Jan 13 21:21:16.935117 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:21:16.990230 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:21:16.990678 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:21:17.288108 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:21:17.288256 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:21:17.554824 dockerd[1671]: time="2025-01-13T21:21:17.554661330Z" level=info msg="Starting up" Jan 13 21:21:17.814016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:21:17.823995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:17.984433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:17.989527 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:21:18.042303 kubelet[1702]: E0113 21:21:18.042234 1702 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:21:18.049947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:21:18.050145 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:21:18.332276 dockerd[1671]: time="2025-01-13T21:21:18.332217446Z" level=info msg="Loading containers: start." Jan 13 21:21:18.808883 kernel: Initializing XFRM netlink socket Jan 13 21:21:18.889740 systemd-networkd[1390]: docker0: Link UP Jan 13 21:21:18.912481 dockerd[1671]: time="2025-01-13T21:21:18.912436103Z" level=info msg="Loading containers: done." Jan 13 21:21:18.927761 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2619821623-merged.mount: Deactivated successfully. 
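
kubelet.service is being restarted on a roughly 10-second cadence (failure at 21:21:07.56, restart scheduled 21:21:17.81; failure at 21:21:18.05, counter 2 at 21:21:28.30 below), and the unset KUBELET_KUBEADM_ARGS/KUBELET_EXTRA_ARGS variables match a kubeadm-style drop-in. A hedged sketch of such a drop-in; the exact unit files on this host are not shown in the log:

    # hedged sketch of a kubeadm-style kubelet drop-in, not taken from this host
    [Service]
    Restart=always
    RestartSec=10
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env   # would set KUBELET_KUBEADM_ARGS
    EnvironmentFile=-/etc/default/kubelet                 # would set KUBELET_EXTRA_ARGS

The leading "-" on EnvironmentFile makes a missing file non-fatal, which is consistent with the "referenced but unset environment variable" notices rather than hard failures.
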
Jan 13 21:21:18.929481 dockerd[1671]: time="2025-01-13T21:21:18.929438258Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:21:18.929569 dockerd[1671]: time="2025-01-13T21:21:18.929545459Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:21:18.929693 dockerd[1671]: time="2025-01-13T21:21:18.929667007Z" level=info msg="Daemon has completed initialization" Jan 13 21:21:18.968059 dockerd[1671]: time="2025-01-13T21:21:18.967984726Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:21:18.968242 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:21:19.672684 containerd[1459]: time="2025-01-13T21:21:19.672640277Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 21:21:20.273841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206584525.mount: Deactivated successfully. Jan 13 21:21:21.350047 containerd[1459]: time="2025-01-13T21:21:21.349987953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:21.351060 containerd[1459]: time="2025-01-13T21:21:21.350972649Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Jan 13 21:21:21.351964 containerd[1459]: time="2025-01-13T21:21:21.351937268Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:21.354664 containerd[1459]: time="2025-01-13T21:21:21.354614469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:21.356319 containerd[1459]: time="2025-01-13T21:21:21.356279792Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.683597166s" Jan 13 21:21:21.356375 containerd[1459]: time="2025-01-13T21:21:21.356322612Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 21:21:21.384103 containerd[1459]: time="2025-01-13T21:21:21.383968328Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 21:21:23.777314 containerd[1459]: time="2025-01-13T21:21:23.777259273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:23.778348 containerd[1459]: time="2025-01-13T21:21:23.778301828Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Jan 13 21:21:23.779414 containerd[1459]: time="2025-01-13T21:21:23.779383817Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:23.782069 containerd[1459]: time="2025-01-13T21:21:23.782043555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:23.782906 containerd[1459]: time="2025-01-13T21:21:23.782879703Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.398681645s" Jan 13 21:21:23.782953 containerd[1459]: time="2025-01-13T21:21:23.782908166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 21:21:23.805401 containerd[1459]: time="2025-01-13T21:21:23.805361264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 21:21:25.602761 containerd[1459]: time="2025-01-13T21:21:25.602695708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:25.603352 containerd[1459]: time="2025-01-13T21:21:25.603291305Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Jan 13 21:21:25.604440 containerd[1459]: time="2025-01-13T21:21:25.604379946Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:25.607254 containerd[1459]: time="2025-01-13T21:21:25.607212909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:25.608435 containerd[1459]: time="2025-01-13T21:21:25.608398452Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.802994749s" Jan 13 21:21:25.608473 containerd[1459]: time="2025-01-13T21:21:25.608437566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 21:21:25.632409 containerd[1459]: time="2025-01-13T21:21:25.632334331Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:21:26.830621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907457646.mount: Deactivated successfully. 
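
The pull timings reported above give a rough sense of effective registry throughput, using the image sizes containerd reports (approximate wire rates, ignoring decompression and unpack overhead):

    kube-apiserver:          32,672,442 B / 1.684 s  ~  18.5 MiB/s
    kube-controller-manager: 31,051,521 B / 2.399 s  ~  12.3 MiB/s
    kube-scheduler:          19,228,165 B / 1.803 s  ~  10.2 MiB/s
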
Jan 13 21:21:27.195105 containerd[1459]: time="2025-01-13T21:21:27.194969930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:27.195811 containerd[1459]: time="2025-01-13T21:21:27.195739874Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 21:21:27.196777 containerd[1459]: time="2025-01-13T21:21:27.196741903Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:27.198783 containerd[1459]: time="2025-01-13T21:21:27.198747625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:27.199294 containerd[1459]: time="2025-01-13T21:21:27.199259404Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.566878657s" Jan 13 21:21:27.199294 containerd[1459]: time="2025-01-13T21:21:27.199291445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:21:27.226735 containerd[1459]: time="2025-01-13T21:21:27.226696479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:21:27.749661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638455418.mount: Deactivated successfully. Jan 13 21:21:28.300425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:21:28.311017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:28.452534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:28.457221 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:21:28.597154 kubelet[1962]: E0113 21:21:28.597008 1962 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:21:28.601428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:21:28.601654 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:21:29.179922 containerd[1459]: time="2025-01-13T21:21:29.179847985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:29.180779 containerd[1459]: time="2025-01-13T21:21:29.180704371Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:21:29.181927 containerd[1459]: time="2025-01-13T21:21:29.181900534Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:29.184823 containerd[1459]: time="2025-01-13T21:21:29.184777760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:29.185836 containerd[1459]: time="2025-01-13T21:21:29.185798554Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.95906718s" Jan 13 21:21:29.185836 containerd[1459]: time="2025-01-13T21:21:29.185830755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:21:29.205294 containerd[1459]: time="2025-01-13T21:21:29.205247488Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:21:29.880693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891928320.mount: Deactivated successfully. 
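
The tmpmount unit names above (var-lib-containerd-tmpmounts-containerd\x2dmount....mount) are systemd path escaping at work: "/" becomes "-" and literal hyphens in the path are escaped as \x2d. The mapping can be reproduced with systemd-escape, here for the mount number taken from the log line above:

    systemd-escape -p --suffix=mount /var/lib/containerd/tmpmounts/containerd-mount2891928320
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount2891928320.mount
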
Jan 13 21:21:29.887872 containerd[1459]: time="2025-01-13T21:21:29.887805819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:29.888666 containerd[1459]: time="2025-01-13T21:21:29.888594077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 21:21:29.889946 containerd[1459]: time="2025-01-13T21:21:29.889907009Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:29.892187 containerd[1459]: time="2025-01-13T21:21:29.892154093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:29.892779 containerd[1459]: time="2025-01-13T21:21:29.892730624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 687.441859ms" Jan 13 21:21:29.892779 containerd[1459]: time="2025-01-13T21:21:29.892773164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:21:29.916625 containerd[1459]: time="2025-01-13T21:21:29.916586753Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 21:21:30.766790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617017570.mount: Deactivated successfully. Jan 13 21:21:32.343998 containerd[1459]: time="2025-01-13T21:21:32.343934126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:32.345007 containerd[1459]: time="2025-01-13T21:21:32.344960952Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 13 21:21:32.346111 containerd[1459]: time="2025-01-13T21:21:32.346082926Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:32.348796 containerd[1459]: time="2025-01-13T21:21:32.348761078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:21:32.349866 containerd[1459]: time="2025-01-13T21:21:32.349818341Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.433192604s" Jan 13 21:21:32.349922 containerd[1459]: time="2025-01-13T21:21:32.349851223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 21:21:34.825561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:21:34.836034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:34.851194 systemd[1]: Reloading requested from client PID 2147 ('systemctl') (unit session-9.scope)... Jan 13 21:21:34.851211 systemd[1]: Reloading... Jan 13 21:21:34.926096 zram_generator::config[2186]: No configuration found. Jan 13 21:21:35.150063 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:35.225045 systemd[1]: Reloading finished in 373 ms. Jan 13 21:21:35.270115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:35.273416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:35.276566 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:21:35.276810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:35.278417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:35.421364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:35.425417 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:21:35.460285 kubelet[2236]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:21:35.460285 kubelet[2236]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:21:35.460285 kubelet[2236]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:21:35.461250 kubelet[2236]: I0113 21:21:35.461203 2236 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:21:35.736940 kubelet[2236]: I0113 21:21:35.736815 2236 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:21:35.736940 kubelet[2236]: I0113 21:21:35.736851 2236 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:21:35.737118 kubelet[2236]: I0113 21:21:35.737093 2236 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:21:35.751983 kubelet[2236]: I0113 21:21:35.751762 2236 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:21:35.752186 kubelet[2236]: E0113 21:21:35.752161 2236 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:35.764129 kubelet[2236]: I0113 21:21:35.764100 2236 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:21:35.767001 kubelet[2236]: I0113 21:21:35.766959 2236 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:21:35.767181 kubelet[2236]: I0113 21:21:35.766994 2236 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:21:35.767596 kubelet[2236]: I0113 21:21:35.767573 2236 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:21:35.767596 kubelet[2236]: I0113 21:21:35.767589 2236 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:21:35.767734 kubelet[2236]: I0113 21:21:35.767714 2236 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:21:35.768297 kubelet[2236]: I0113 21:21:35.768276 2236 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:21:35.768297 kubelet[2236]: I0113 21:21:35.768291 2236 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:21:35.768353 kubelet[2236]: I0113 21:21:35.768321 2236 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:21:35.768353 kubelet[2236]: I0113 21:21:35.768340 2236 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:21:35.770235 kubelet[2236]: W0113 21:21:35.770188 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:35.770276 kubelet[2236]: E0113 21:21:35.770249 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:35.771425 kubelet[2236]: W0113 21:21:35.771389 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:35.771425 kubelet[2236]: E0113 21:21:35.771422 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:35.771903 kubelet[2236]: I0113 21:21:35.771886 2236 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:21:35.773170 kubelet[2236]: I0113 21:21:35.773153 2236 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:21:35.773226 kubelet[2236]: W0113 21:21:35.773201 2236 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:21:35.773774 kubelet[2236]: I0113 21:21:35.773750 2236 server.go:1264] "Started kubelet" Jan 13 21:21:35.774624 kubelet[2236]: I0113 21:21:35.774565 2236 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:21:35.775638 kubelet[2236]: I0113 21:21:35.775498 2236 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:21:35.775638 kubelet[2236]: I0113 21:21:35.775524 2236 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:21:35.775638 kubelet[2236]: I0113 21:21:35.775553 2236 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:21:35.776770 kubelet[2236]: I0113 21:21:35.776741 2236 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:21:35.779175 kubelet[2236]: E0113 21:21:35.779049 2236 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.66:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.66:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d6541523b08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:21:35.773735688 +0000 UTC m=+0.344738619,LastTimestamp:2025-01-13 21:21:35.773735688 +0000 UTC m=+0.344738619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:21:35.779297 kubelet[2236]: I0113 21:21:35.779285 2236 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:21:35.779375 kubelet[2236]: I0113 21:21:35.779364 2236 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:21:35.779427 kubelet[2236]: I0113 21:21:35.779417 2236 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:21:35.779666 kubelet[2236]: W0113 21:21:35.779629 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:35.779695 kubelet[2236]: E0113 21:21:35.779666 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:35.780283 kubelet[2236]: E0113 21:21:35.780243 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="200ms" Jan 13 21:21:35.780574 kubelet[2236]: E0113 21:21:35.780396 2236 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:21:35.780720 kubelet[2236]: I0113 21:21:35.780703 2236 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:21:35.780759 kubelet[2236]: E0113 21:21:35.780737 2236 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:21:35.780814 kubelet[2236]: I0113 21:21:35.780795 2236 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:21:35.781572 kubelet[2236]: I0113 21:21:35.781556 2236 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:21:35.795174 kubelet[2236]: I0113 21:21:35.795130 2236 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:21:35.795174 kubelet[2236]: I0113 21:21:35.795150 2236 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:21:35.795174 kubelet[2236]: I0113 21:21:35.795165 2236 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:21:35.795768 kubelet[2236]: I0113 21:21:35.795717 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:21:35.797192 kubelet[2236]: I0113 21:21:35.797147 2236 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:21:35.797192 kubelet[2236]: I0113 21:21:35.797178 2236 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:21:35.797192 kubelet[2236]: I0113 21:21:35.797193 2236 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:21:35.797284 kubelet[2236]: E0113 21:21:35.797229 2236 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:21:35.797674 kubelet[2236]: W0113 21:21:35.797637 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:35.797721 kubelet[2236]: E0113 21:21:35.797682 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:35.882483 kubelet[2236]: I0113 21:21:35.882443 2236 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:35.882683 kubelet[2236]: E0113 21:21:35.882663 2236 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Jan 13 21:21:35.897944 kubelet[2236]: E0113 21:21:35.897882 2236 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:21:35.981224 kubelet[2236]: E0113 21:21:35.981172 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="400ms" Jan 13 21:21:36.084809 kubelet[2236]: I0113 21:21:36.084709 2236 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:36.085070 kubelet[2236]: E0113 21:21:36.084993 2236 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Jan 13 21:21:36.098139 kubelet[2236]: E0113 21:21:36.098107 2236 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:21:36.246437 kubelet[2236]: I0113 21:21:36.246382 2236 policy_none.go:49] "None policy: Start" Jan 13 21:21:36.247267 kubelet[2236]: I0113 21:21:36.247249 2236 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:21:36.247267 kubelet[2236]: I0113 21:21:36.247272 2236 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:21:36.252647 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:21:36.277162 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:21:36.289226 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 13 21:21:36.290306 kubelet[2236]: I0113 21:21:36.290274 2236 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:21:36.290548 kubelet[2236]: I0113 21:21:36.290508 2236 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:21:36.290645 kubelet[2236]: I0113 21:21:36.290623 2236 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:21:36.291646 kubelet[2236]: E0113 21:21:36.291618 2236 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:21:36.382233 kubelet[2236]: E0113 21:21:36.382079 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="800ms" Jan 13 21:21:36.486875 kubelet[2236]: I0113 21:21:36.486826 2236 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:36.487219 kubelet[2236]: E0113 21:21:36.487160 2236 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Jan 13 21:21:36.498372 kubelet[2236]: I0113 21:21:36.498285 2236 topology_manager.go:215] "Topology Admit Handler" podUID="563bc4a6bd7f44d5d28c174257e9f289" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:21:36.499383 kubelet[2236]: I0113 21:21:36.499352 2236 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:21:36.500085 kubelet[2236]: I0113 21:21:36.500066 2236 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:21:36.506478 systemd[1]: Created slice kubepods-burstable-pod563bc4a6bd7f44d5d28c174257e9f289.slice - libcontainer container kubepods-burstable-pod563bc4a6bd7f44d5d28c174257e9f289.slice. Jan 13 21:21:36.520020 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Jan 13 21:21:36.523660 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. 
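Annotation: the kubepods-burstable-pod<uid>.slice units systemd just created follow kubelet's systemd cgroup-driver naming, visible directly in this log: the QoS class becomes a parent slice, and hyphens in the pod UID are rewritten to underscores because "-" is systemd's slice hierarchy separator (compare poda87db975_7ab9_4b22_85ee_af6f1bbcad75 further down). A small Go sketch reconstructed from the log output, not kubelet's actual source:

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the naming visible in the log:
// kubepods[-<qos>]-pod<uid>.slice, with "-" in the UID mapped to "_".
func podSliceName(qosClass, podUID string) string {
	prefix := "kubepods"
	if qosClass != "guaranteed" { // guaranteed pods live directly under kubepods.slice
		prefix += "-" + qosClass
	}
	return fmt.Sprintf("%s-pod%s.slice", prefix, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "kubepods-burstable-pod563bc4a6bd7f44d5d28c174257e9f289.slice" above.
	fmt.Println(podSliceName("burstable", "563bc4a6bd7f44d5d28c174257e9f289"))
	// Matches "kubepods-besteffort-poda87db975_7ab9_4b22_85ee_af6f1bbcad75.slice" later in the log.
	fmt.Println(podSliceName("besteffort", "a87db975-7ab9-4b22-85ee-af6f1bbcad75"))
}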
Jan 13 21:21:36.585479 kubelet[2236]: I0113 21:21:36.585418 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/563bc4a6bd7f44d5d28c174257e9f289-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"563bc4a6bd7f44d5d28c174257e9f289\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:36.585479 kubelet[2236]: I0113 21:21:36.585457 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/563bc4a6bd7f44d5d28c174257e9f289-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"563bc4a6bd7f44d5d28c174257e9f289\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:36.585479 kubelet[2236]: I0113 21:21:36.585475 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:36.585479 kubelet[2236]: I0113 21:21:36.585489 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:36.585705 kubelet[2236]: I0113 21:21:36.585505 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:21:36.585705 kubelet[2236]: I0113 21:21:36.585518 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/563bc4a6bd7f44d5d28c174257e9f289-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"563bc4a6bd7f44d5d28c174257e9f289\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:36.585705 kubelet[2236]: I0113 21:21:36.585531 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:36.585705 kubelet[2236]: I0113 21:21:36.585545 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:36.585705 kubelet[2236]: I0113 21:21:36.585560 2236 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:36.726424 kubelet[2236]: W0113 21:21:36.726240 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:36.726424 kubelet[2236]: E0113 21:21:36.726334 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:36.798104 kubelet[2236]: W0113 21:21:36.798054 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:36.798104 kubelet[2236]: E0113 21:21:36.798096 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:36.819275 kubelet[2236]: E0113 21:21:36.819240 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:36.819810 containerd[1459]: time="2025-01-13T21:21:36.819772511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:563bc4a6bd7f44d5d28c174257e9f289,Namespace:kube-system,Attempt:0,}" Jan 13 21:21:36.823006 kubelet[2236]: E0113 21:21:36.822982 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:36.823431 containerd[1459]: time="2025-01-13T21:21:36.823377541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Jan 13 21:21:36.825650 kubelet[2236]: E0113 21:21:36.825630 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:36.825966 containerd[1459]: time="2025-01-13T21:21:36.825932803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Jan 13 21:21:36.944192 kubelet[2236]: W0113 21:21:36.944118 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:36.944192 kubelet[2236]: E0113 21:21:36.944180 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:37.138690 kubelet[2236]: W0113 21:21:37.138525 2236 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:37.138690 kubelet[2236]: E0113 21:21:37.138614 2236 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Jan 13 21:21:37.183141 kubelet[2236]: E0113 21:21:37.183080 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="1.6s" Jan 13 21:21:37.289087 kubelet[2236]: I0113 21:21:37.289055 2236 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:37.289438 kubelet[2236]: E0113 21:21:37.289393 2236 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Jan 13 21:21:37.383984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503039302.mount: Deactivated successfully. Jan 13 21:21:37.393263 containerd[1459]: time="2025-01-13T21:21:37.393131123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:37.394113 containerd[1459]: time="2025-01-13T21:21:37.394041600Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:21:37.397433 containerd[1459]: time="2025-01-13T21:21:37.397377206Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:37.398381 containerd[1459]: time="2025-01-13T21:21:37.398340773Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:37.399559 containerd[1459]: time="2025-01-13T21:21:37.399500888Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:37.400460 containerd[1459]: time="2025-01-13T21:21:37.400376190Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:21:37.401426 containerd[1459]: time="2025-01-13T21:21:37.401387897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:21:37.403080 containerd[1459]: time="2025-01-13T21:21:37.403047319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:21:37.405205 containerd[1459]: time="2025-01-13T21:21:37.405163347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 585.316367ms" Jan 13 21:21:37.406055 containerd[1459]: time="2025-01-13T21:21:37.406021356Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.540271ms" Jan 13 21:21:37.409225 containerd[1459]: time="2025-01-13T21:21:37.409186452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 583.20133ms" Jan 13 21:21:37.526949 containerd[1459]: time="2025-01-13T21:21:37.526804428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:37.526949 containerd[1459]: time="2025-01-13T21:21:37.526886812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:37.526949 containerd[1459]: time="2025-01-13T21:21:37.526900688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:37.526949 containerd[1459]: time="2025-01-13T21:21:37.526815088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:37.527145 containerd[1459]: time="2025-01-13T21:21:37.526974056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:37.527145 containerd[1459]: time="2025-01-13T21:21:37.526979957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:37.527145 containerd[1459]: time="2025-01-13T21:21:37.527049207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:37.527355 containerd[1459]: time="2025-01-13T21:21:37.527230968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:37.531257 containerd[1459]: time="2025-01-13T21:21:37.531181346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:37.531299 containerd[1459]: time="2025-01-13T21:21:37.531277617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:37.531339 containerd[1459]: time="2025-01-13T21:21:37.531307553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:37.531506 containerd[1459]: time="2025-01-13T21:21:37.531451593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:37.549018 systemd[1]: Started cri-containerd-9192628850abdbb7aa1f4f8e7261453e00b0b1ace0a8c6b3e98bf69951058484.scope - libcontainer container 9192628850abdbb7aa1f4f8e7261453e00b0b1ace0a8c6b3e98bf69951058484. Jan 13 21:21:37.553146 systemd[1]: Started cri-containerd-d05761d0efef7f86dd5e8bbdb2234f2153be1696183ca35630f2524bc347f2e6.scope - libcontainer container d05761d0efef7f86dd5e8bbdb2234f2153be1696183ca35630f2524bc347f2e6. Jan 13 21:21:37.555325 systemd[1]: Started cri-containerd-fde00230fde8fdcce4414b798314e527ee6580bb9c6a9ac445a3a50f2e7f42d2.scope - libcontainer container fde00230fde8fdcce4414b798314e527ee6580bb9c6a9ac445a3a50f2e7f42d2. Jan 13 21:21:37.594242 containerd[1459]: time="2025-01-13T21:21:37.594155074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9192628850abdbb7aa1f4f8e7261453e00b0b1ace0a8c6b3e98bf69951058484\"" Jan 13 21:21:37.597137 kubelet[2236]: E0113 21:21:37.596751 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:37.597791 containerd[1459]: time="2025-01-13T21:21:37.597119604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"fde00230fde8fdcce4414b798314e527ee6580bb9c6a9ac445a3a50f2e7f42d2\"" Jan 13 21:21:37.598540 kubelet[2236]: E0113 21:21:37.598139 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:37.600399 containerd[1459]: time="2025-01-13T21:21:37.600366874Z" level=info msg="CreateContainer within sandbox \"fde00230fde8fdcce4414b798314e527ee6580bb9c6a9ac445a3a50f2e7f42d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:21:37.600560 containerd[1459]: time="2025-01-13T21:21:37.600538315Z" level=info msg="CreateContainer within sandbox \"9192628850abdbb7aa1f4f8e7261453e00b0b1ace0a8c6b3e98bf69951058484\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:21:37.602791 containerd[1459]: time="2025-01-13T21:21:37.602765772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:563bc4a6bd7f44d5d28c174257e9f289,Namespace:kube-system,Attempt:0,} returns sandbox id \"d05761d0efef7f86dd5e8bbdb2234f2153be1696183ca35630f2524bc347f2e6\"" Jan 13 21:21:37.603408 kubelet[2236]: E0113 21:21:37.603385 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:37.605382 containerd[1459]: time="2025-01-13T21:21:37.605348145Z" level=info msg="CreateContainer within sandbox \"d05761d0efef7f86dd5e8bbdb2234f2153be1696183ca35630f2524bc347f2e6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:21:37.626381 containerd[1459]: time="2025-01-13T21:21:37.626332338Z" level=info msg="CreateContainer within sandbox \"fde00230fde8fdcce4414b798314e527ee6580bb9c6a9ac445a3a50f2e7f42d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3150aeb12395eb4c47ecede14747a43bfc0ca15278876c530500ecc2f3672798\"" Jan 13 
21:21:37.626943 containerd[1459]: time="2025-01-13T21:21:37.626813961Z" level=info msg="StartContainer for \"3150aeb12395eb4c47ecede14747a43bfc0ca15278876c530500ecc2f3672798\"" Jan 13 21:21:37.632373 containerd[1459]: time="2025-01-13T21:21:37.632329254Z" level=info msg="CreateContainer within sandbox \"9192628850abdbb7aa1f4f8e7261453e00b0b1ace0a8c6b3e98bf69951058484\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"24264a31d603d4458c2a14e8d2e792bd559638adc6c7690e321c8da9016b110d\"" Jan 13 21:21:37.633047 containerd[1459]: time="2025-01-13T21:21:37.632975196Z" level=info msg="StartContainer for \"24264a31d603d4458c2a14e8d2e792bd559638adc6c7690e321c8da9016b110d\"" Jan 13 21:21:37.635971 containerd[1459]: time="2025-01-13T21:21:37.635927783Z" level=info msg="CreateContainer within sandbox \"d05761d0efef7f86dd5e8bbdb2234f2153be1696183ca35630f2524bc347f2e6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d5f47c13b4641cb036d4293558eb7d18c1a51eec70b43c0c3b233f7b89d962c3\"" Jan 13 21:21:37.636516 containerd[1459]: time="2025-01-13T21:21:37.636426639Z" level=info msg="StartContainer for \"d5f47c13b4641cb036d4293558eb7d18c1a51eec70b43c0c3b233f7b89d962c3\"" Jan 13 21:21:37.655330 systemd[1]: Started cri-containerd-3150aeb12395eb4c47ecede14747a43bfc0ca15278876c530500ecc2f3672798.scope - libcontainer container 3150aeb12395eb4c47ecede14747a43bfc0ca15278876c530500ecc2f3672798. Jan 13 21:21:37.671019 systemd[1]: Started cri-containerd-24264a31d603d4458c2a14e8d2e792bd559638adc6c7690e321c8da9016b110d.scope - libcontainer container 24264a31d603d4458c2a14e8d2e792bd559638adc6c7690e321c8da9016b110d. Jan 13 21:21:37.672731 systemd[1]: Started cri-containerd-d5f47c13b4641cb036d4293558eb7d18c1a51eec70b43c0c3b233f7b89d962c3.scope - libcontainer container d5f47c13b4641cb036d4293558eb7d18c1a51eec70b43c0c3b233f7b89d962c3. 
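Annotation: the RunPodSandbox → CreateContainer → StartContainer sequence above is the CRI conversation between kubelet and containerd over the gRPC socket. A hedged sketch of the same three calls using the published CRI bindings (k8s.io/cri-api); the sandbox metadata is copied from the kube-scheduler entry, while the socket path and image reference are assumptions for illustration:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// kubelet's --container-runtime-endpoint; path assumed for this host.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: metadata copied from the kube-scheduler log entry.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Uid:       "b107a98bcf27297d642d248711a3fc70",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox (image reference is assumed).
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.30.1"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer: containerd wraps the task in a cri-containerd-<id>.scope unit,
	// which is exactly what the systemd "Started cri-containerd-..." lines above record.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started container", created.ContainerId)
}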
Jan 13 21:21:38.193642 containerd[1459]: time="2025-01-13T21:21:38.193553478Z" level=info msg="StartContainer for \"24264a31d603d4458c2a14e8d2e792bd559638adc6c7690e321c8da9016b110d\" returns successfully" Jan 13 21:21:38.194601 containerd[1459]: time="2025-01-13T21:21:38.193743233Z" level=info msg="StartContainer for \"d5f47c13b4641cb036d4293558eb7d18c1a51eec70b43c0c3b233f7b89d962c3\" returns successfully" Jan 13 21:21:38.194601 containerd[1459]: time="2025-01-13T21:21:38.193766428Z" level=info msg="StartContainer for \"3150aeb12395eb4c47ecede14747a43bfc0ca15278876c530500ecc2f3672798\" returns successfully" Jan 13 21:21:38.199524 kubelet[2236]: E0113 21:21:38.199505 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:38.210140 kubelet[2236]: E0113 21:21:38.210107 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:38.828923 kubelet[2236]: E0113 21:21:38.828878 2236 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:21:38.890683 kubelet[2236]: I0113 21:21:38.890657 2236 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:38.897186 kubelet[2236]: I0113 21:21:38.897168 2236 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:21:38.904114 kubelet[2236]: E0113 21:21:38.904068 2236 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:21:39.005039 kubelet[2236]: E0113 21:21:39.004990 2236 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:21:39.207743 kubelet[2236]: E0113 21:21:39.207638 2236 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 13 21:21:39.208300 kubelet[2236]: E0113 21:21:39.207639 2236 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:39.208300 kubelet[2236]: E0113 21:21:39.207875 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:39.208300 kubelet[2236]: E0113 21:21:39.208235 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:39.772358 kubelet[2236]: I0113 21:21:39.772315 2236 apiserver.go:52] "Watching apiserver" Jan 13 21:21:39.779478 kubelet[2236]: I0113 21:21:39.779430 2236 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:21:40.255563 kubelet[2236]: E0113 21:21:40.255437 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:40.856127 systemd[1]: Reloading requested from client PID 2509 ('systemctl') (unit session-9.scope)... 
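Annotation: the "Attempting to register node" / "Successfully registered node" pair corresponds to a POST of a Node object to /api/v1/nodes, which kept failing with "connection refused" until the just-started kube-apiserver container began serving. A hedged client-go sketch of that registration step; the kubeconfig path is an assumption (the real kubelet authenticates with its bootstrap client certificates):

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path assumed for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}
	_, err = cs.CoreV1().Nodes().Create(context.Background(), node, metav1.CreateOptions{})
	switch {
	case err == nil:
		log.Print("Successfully registered node")
	case apierrors.IsAlreadyExists(err):
		log.Print("Node was previously registered") // the path taken after the kubelet restart at 21:21:41
	default:
		log.Printf("Unable to register node with API server: %v; retrying with backoff", err)
	}
}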
Jan 13 21:21:40.856144 systemd[1]: Reloading... Jan 13 21:21:40.943138 zram_generator::config[2554]: No configuration found. Jan 13 21:21:41.042889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:41.152831 systemd[1]: Reloading finished in 296 ms. Jan 13 21:21:41.206953 kubelet[2236]: E0113 21:21:41.206515 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:41.207029 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:41.221474 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:21:41.221734 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:41.229231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:21:41.372918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:21:41.378074 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:21:41.420387 kubelet[2593]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:21:41.420717 kubelet[2593]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:21:41.420717 kubelet[2593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:21:41.420797 kubelet[2593]: I0113 21:21:41.420765 2593 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:21:41.425051 kubelet[2593]: I0113 21:21:41.425026 2593 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:21:41.425051 kubelet[2593]: I0113 21:21:41.425044 2593 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:21:41.425201 kubelet[2593]: I0113 21:21:41.425188 2593 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:21:41.426338 kubelet[2593]: I0113 21:21:41.426320 2593 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:21:41.427331 kubelet[2593]: I0113 21:21:41.427298 2593 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:21:41.433997 kubelet[2593]: I0113 21:21:41.433975 2593 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:21:41.434211 kubelet[2593]: I0113 21:21:41.434182 2593 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:21:41.434346 kubelet[2593]: I0113 21:21:41.434204 2593 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:21:41.434421 kubelet[2593]: I0113 21:21:41.434360 2593 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:21:41.434421 kubelet[2593]: I0113 21:21:41.434369 2593 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:21:41.434421 kubelet[2593]: I0113 21:21:41.434408 2593 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:21:41.434501 kubelet[2593]: I0113 21:21:41.434481 2593 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:21:41.434501 kubelet[2593]: I0113 21:21:41.434490 2593 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:21:41.434549 kubelet[2593]: I0113 21:21:41.434508 2593 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:21:41.434549 kubelet[2593]: I0113 21:21:41.434526 2593 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:21:41.435378 kubelet[2593]: I0113 21:21:41.435331 2593 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:21:41.435488 kubelet[2593]: I0113 21:21:41.435474 2593 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:21:41.435806 kubelet[2593]: I0113 21:21:41.435780 2593 server.go:1264] "Started kubelet" Jan 13 21:21:41.441594 kubelet[2593]: I0113 21:21:41.441563 2593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:21:41.444412 kubelet[2593]: I0113 21:21:41.444371 2593 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:21:41.445873 kubelet[2593]: I0113 21:21:41.445832 2593 volume_manager.go:291] "Starting Kubelet 
Volume Manager" Jan 13 21:21:41.446520 kubelet[2593]: I0113 21:21:41.446489 2593 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:21:41.447168 kubelet[2593]: I0113 21:21:41.447151 2593 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:21:41.447355 kubelet[2593]: I0113 21:21:41.447324 2593 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:21:41.447470 kubelet[2593]: I0113 21:21:41.447429 2593 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:21:41.447675 kubelet[2593]: I0113 21:21:41.447658 2593 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:21:41.449339 kubelet[2593]: E0113 21:21:41.449250 2593 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:21:41.450536 kubelet[2593]: I0113 21:21:41.450499 2593 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:21:41.450605 kubelet[2593]: I0113 21:21:41.450589 2593 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:21:41.452772 kubelet[2593]: I0113 21:21:41.452430 2593 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:21:41.456262 kubelet[2593]: I0113 21:21:41.456200 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:21:41.457411 kubelet[2593]: I0113 21:21:41.457380 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:21:41.457411 kubelet[2593]: I0113 21:21:41.457413 2593 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:21:41.457476 kubelet[2593]: I0113 21:21:41.457438 2593 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:21:41.457502 kubelet[2593]: E0113 21:21:41.457488 2593 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:21:41.485926 kubelet[2593]: I0113 21:21:41.485900 2593 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:21:41.485926 kubelet[2593]: I0113 21:21:41.485917 2593 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:21:41.485926 kubelet[2593]: I0113 21:21:41.485935 2593 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:21:41.486089 kubelet[2593]: I0113 21:21:41.486059 2593 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:21:41.486089 kubelet[2593]: I0113 21:21:41.486068 2593 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:21:41.486089 kubelet[2593]: I0113 21:21:41.486085 2593 policy_none.go:49] "None policy: Start" Jan 13 21:21:41.486768 kubelet[2593]: I0113 21:21:41.486641 2593 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:21:41.486768 kubelet[2593]: I0113 21:21:41.486659 2593 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:21:41.486984 kubelet[2593]: I0113 21:21:41.486801 2593 state_mem.go:75] "Updated machine memory state" Jan 13 21:21:41.490686 kubelet[2593]: I0113 21:21:41.490657 2593 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:21:41.490939 
kubelet[2593]: I0113 21:21:41.490833 2593 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:21:41.491007 kubelet[2593]: I0113 21:21:41.490991 2593 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:21:41.550911 kubelet[2593]: I0113 21:21:41.550872 2593 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:21:41.556625 kubelet[2593]: I0113 21:21:41.556595 2593 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 21:21:41.556684 kubelet[2593]: I0113 21:21:41.556667 2593 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:21:41.557572 kubelet[2593]: I0113 21:21:41.557543 2593 topology_manager.go:215] "Topology Admit Handler" podUID="563bc4a6bd7f44d5d28c174257e9f289" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:21:41.557644 kubelet[2593]: I0113 21:21:41.557628 2593 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:21:41.557731 kubelet[2593]: I0113 21:21:41.557665 2593 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:21:41.563893 kubelet[2593]: E0113 21:21:41.563836 2593 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 21:21:41.648526 kubelet[2593]: I0113 21:21:41.648485 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/563bc4a6bd7f44d5d28c174257e9f289-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"563bc4a6bd7f44d5d28c174257e9f289\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:41.648526 kubelet[2593]: I0113 21:21:41.648517 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:41.648526 kubelet[2593]: I0113 21:21:41.648539 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:41.648721 kubelet[2593]: I0113 21:21:41.648556 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:41.648721 kubelet[2593]: I0113 21:21:41.648570 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/563bc4a6bd7f44d5d28c174257e9f289-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"563bc4a6bd7f44d5d28c174257e9f289\") " pod="kube-system/kube-apiserver-localhost" Jan 13 
21:21:41.648721 kubelet[2593]: I0113 21:21:41.648587 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/563bc4a6bd7f44d5d28c174257e9f289-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"563bc4a6bd7f44d5d28c174257e9f289\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:41.648721 kubelet[2593]: I0113 21:21:41.648601 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:41.648721 kubelet[2593]: I0113 21:21:41.648615 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:21:41.648833 kubelet[2593]: I0113 21:21:41.648629 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:21:41.865337 kubelet[2593]: E0113 21:21:41.865220 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:41.865337 kubelet[2593]: E0113 21:21:41.865220 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:41.865694 kubelet[2593]: E0113 21:21:41.865436 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:42.435498 kubelet[2593]: I0113 21:21:42.435427 2593 apiserver.go:52] "Watching apiserver" Jan 13 21:21:42.447677 kubelet[2593]: I0113 21:21:42.447632 2593 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:21:42.470374 kubelet[2593]: E0113 21:21:42.470281 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:42.470552 kubelet[2593]: E0113 21:21:42.470433 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:42.477120 kubelet[2593]: E0113 21:21:42.477075 2593 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:21:42.478715 kubelet[2593]: E0113 21:21:42.478680 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:42.524552 
kubelet[2593]: I0113 21:21:42.524481 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.524458219 podStartE2EDuration="1.524458219s" podCreationTimestamp="2025-01-13 21:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:21:42.523085799 +0000 UTC m=+1.140616052" watchObservedRunningTime="2025-01-13 21:21:42.524458219 +0000 UTC m=+1.141988472" Jan 13 21:21:42.524754 kubelet[2593]: I0113 21:21:42.524624 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.524619116 podStartE2EDuration="2.524619116s" podCreationTimestamp="2025-01-13 21:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:21:42.503080331 +0000 UTC m=+1.120610584" watchObservedRunningTime="2025-01-13 21:21:42.524619116 +0000 UTC m=+1.142149370" Jan 13 21:21:42.552634 kubelet[2593]: I0113 21:21:42.552437 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.552419037 podStartE2EDuration="1.552419037s" podCreationTimestamp="2025-01-13 21:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:21:42.538946032 +0000 UTC m=+1.156476285" watchObservedRunningTime="2025-01-13 21:21:42.552419037 +0000 UTC m=+1.169949291" Jan 13 21:21:43.471494 kubelet[2593]: E0113 21:21:43.471450 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:43.490874 kubelet[2593]: E0113 21:21:43.490793 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:44.472068 kubelet[2593]: E0113 21:21:44.472044 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:44.797254 kubelet[2593]: E0113 21:21:44.797149 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:45.982978 sudo[1653]: pam_unix(sudo:session): session closed for user root Jan 13 21:21:45.984790 sshd[1650]: pam_unix(sshd:session): session closed for user core Jan 13 21:21:45.989184 systemd[1]: sshd@8-10.0.0.66:22-10.0.0.1:38470.service: Deactivated successfully. Jan 13 21:21:45.991069 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:21:45.991260 systemd[1]: session-9.scope: Consumed 4.509s CPU time, 194.8M memory peak, 0B memory swap peak. Jan 13 21:21:45.991762 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:21:45.992647 systemd-logind[1443]: Removed session 9. Jan 13 21:21:50.740024 update_engine[1447]: I20250113 21:21:50.739902 1447 update_attempter.cc:509] Updating boot flags... 
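Annotation: the pod_startup_latency_tracker entries above are simple timestamp arithmetic: with no image pulls (firstStartedPulling/lastFinishedPulling are the zero time), podStartSLOduration is the gap between podCreationTimestamp and the watch-observed running time, e.g. 21:21:42.524458219 − 21:21:41 = 1.524458219s for kube-apiserver-localhost. A stdlib Go sketch reproducing that arithmetic from the logged values:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.String() format used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Timestamps copied from the kube-apiserver-localhost entry; parse errors
	// ignored because the inputs are known-good literals from the log.
	created, _ := time.Parse(layout, "2025-01-13 21:21:41 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-13 21:21:42.524458219 +0000 UTC")
	fmt.Println(running.Sub(created)) // 1.524458219s, matching podStartSLOduration
}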
Jan 13 21:21:50.770902 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2687) Jan 13 21:21:50.807901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2690) Jan 13 21:21:50.848890 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2690) Jan 13 21:21:52.277745 kubelet[2593]: E0113 21:21:52.277695 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:52.483080 kubelet[2593]: E0113 21:21:52.483049 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:53.494251 kubelet[2593]: E0113 21:21:53.494212 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:54.801365 kubelet[2593]: E0113 21:21:54.801301 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:55.246746 kubelet[2593]: I0113 21:21:55.246192 2593 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:21:55.247123 containerd[1459]: time="2025-01-13T21:21:55.247081376Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:21:55.247493 kubelet[2593]: I0113 21:21:55.247413 2593 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:21:56.082063 kubelet[2593]: I0113 21:21:56.081754 2593 topology_manager.go:215] "Topology Admit Handler" podUID="a87db975-7ab9-4b22-85ee-af6f1bbcad75" podNamespace="kube-system" podName="kube-proxy-4bsmc" Jan 13 21:21:56.089625 systemd[1]: Created slice kubepods-besteffort-poda87db975_7ab9_4b22_85ee_af6f1bbcad75.slice - libcontainer container kubepods-besteffort-poda87db975_7ab9_4b22_85ee_af6f1bbcad75.slice. 
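Annotation: the dns.go:153 "Nameserver limits exceeded" warning recurring through this log means the host resolv.conf lists more nameservers than the classic resolver limit of three, so kubelet keeps only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) when building pod DNS config. A stdlib-only Go sketch of that clamping, reconstructed from the log message rather than from kubelet's actual dns package:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	// Collect every "nameserver" entry from resolv.conf.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	const limit = 3 // resolver limit kubelet enforces
	if len(servers) > limit {
		// Same shape as the warning in the log above.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:limit], " "))
		servers = servers[:limit]
	}
	fmt.Println("applied:", servers)
}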
Jan 13 21:21:56.138127 kubelet[2593]: I0113 21:21:56.138072 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a87db975-7ab9-4b22-85ee-af6f1bbcad75-xtables-lock\") pod \"kube-proxy-4bsmc\" (UID: \"a87db975-7ab9-4b22-85ee-af6f1bbcad75\") " pod="kube-system/kube-proxy-4bsmc" Jan 13 21:21:56.138127 kubelet[2593]: I0113 21:21:56.138120 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r84z\" (UniqueName: \"kubernetes.io/projected/a87db975-7ab9-4b22-85ee-af6f1bbcad75-kube-api-access-5r84z\") pod \"kube-proxy-4bsmc\" (UID: \"a87db975-7ab9-4b22-85ee-af6f1bbcad75\") " pod="kube-system/kube-proxy-4bsmc" Jan 13 21:21:56.138400 kubelet[2593]: I0113 21:21:56.138151 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a87db975-7ab9-4b22-85ee-af6f1bbcad75-kube-proxy\") pod \"kube-proxy-4bsmc\" (UID: \"a87db975-7ab9-4b22-85ee-af6f1bbcad75\") " pod="kube-system/kube-proxy-4bsmc" Jan 13 21:21:56.138400 kubelet[2593]: I0113 21:21:56.138191 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a87db975-7ab9-4b22-85ee-af6f1bbcad75-lib-modules\") pod \"kube-proxy-4bsmc\" (UID: \"a87db975-7ab9-4b22-85ee-af6f1bbcad75\") " pod="kube-system/kube-proxy-4bsmc" Jan 13 21:21:56.243043 kubelet[2593]: E0113 21:21:56.243006 2593 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 21:21:56.243043 kubelet[2593]: E0113 21:21:56.243033 2593 projected.go:200] Error preparing data for projected volume kube-api-access-5r84z for pod kube-system/kube-proxy-4bsmc: configmap "kube-root-ca.crt" not found Jan 13 21:21:56.243214 kubelet[2593]: E0113 21:21:56.243088 2593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a87db975-7ab9-4b22-85ee-af6f1bbcad75-kube-api-access-5r84z podName:a87db975-7ab9-4b22-85ee-af6f1bbcad75 nodeName:}" failed. No retries permitted until 2025-01-13 21:21:56.743073064 +0000 UTC m=+15.360603317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5r84z" (UniqueName: "kubernetes.io/projected/a87db975-7ab9-4b22-85ee-af6f1bbcad75-kube-api-access-5r84z") pod "kube-proxy-4bsmc" (UID: "a87db975-7ab9-4b22-85ee-af6f1bbcad75") : configmap "kube-root-ca.crt" not found Jan 13 21:21:56.395401 kubelet[2593]: I0113 21:21:56.392955 2593 topology_manager.go:215] "Topology Admit Handler" podUID="182374e1-88ac-444d-a0c0-35ccb6c9d67a" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-7v89n" Jan 13 21:21:56.399788 systemd[1]: Created slice kubepods-besteffort-pod182374e1_88ac_444d_a0c0_35ccb6c9d67a.slice - libcontainer container kubepods-besteffort-pod182374e1_88ac_444d_a0c0_35ccb6c9d67a.slice. 
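Annotation: the kube-api-access-5r84z mount failure just below ("configmap \"kube-root-ca.crt\" not found", retry after 500ms) happens because that volume is a projection of the service account token, the per-namespace kube-root-ca.crt ConfigMap, and the namespace name; it cannot mount until the controller manager's root-ca-cert-publisher has created the ConfigMap. A hedged sketch of that projection built with the core/v1 Go types; the field values are the usual defaults, not read from this host:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64p(v int64) *int64 { return &v }

func main() {
	vol := corev1.Volume{
		Name: "kube-api-access-5r84z", // volume name from the log
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Bound service account token (expiry value assumed).
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: int64p(3607),
					}},
					// The ConfigMap whose absence caused the MountVolume.SetUp failure.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// Namespace name via the downward API.
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}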
Jan 13 21:21:56.440759 kubelet[2593]: I0113 21:21:56.440704 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24vfr\" (UniqueName: \"kubernetes.io/projected/182374e1-88ac-444d-a0c0-35ccb6c9d67a-kube-api-access-24vfr\") pod \"tigera-operator-7bc55997bb-7v89n\" (UID: \"182374e1-88ac-444d-a0c0-35ccb6c9d67a\") " pod="tigera-operator/tigera-operator-7bc55997bb-7v89n" Jan 13 21:21:56.440933 kubelet[2593]: I0113 21:21:56.440795 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/182374e1-88ac-444d-a0c0-35ccb6c9d67a-var-lib-calico\") pod \"tigera-operator-7bc55997bb-7v89n\" (UID: \"182374e1-88ac-444d-a0c0-35ccb6c9d67a\") " pod="tigera-operator/tigera-operator-7bc55997bb-7v89n" Jan 13 21:21:56.703272 containerd[1459]: time="2025-01-13T21:21:56.703230448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7v89n,Uid:182374e1-88ac-444d-a0c0-35ccb6c9d67a,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:21:56.727266 containerd[1459]: time="2025-01-13T21:21:56.727179470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:56.727266 containerd[1459]: time="2025-01-13T21:21:56.727229515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:56.727266 containerd[1459]: time="2025-01-13T21:21:56.727243872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:56.727432 containerd[1459]: time="2025-01-13T21:21:56.727344432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:56.751995 systemd[1]: Started cri-containerd-c51c16db79456144cf5b60de4d68195a07d86068a5e8e76cfa404828f4f9d49b.scope - libcontainer container c51c16db79456144cf5b60de4d68195a07d86068a5e8e76cfa404828f4f9d49b. Jan 13 21:21:56.783741 containerd[1459]: time="2025-01-13T21:21:56.783696679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7v89n,Uid:182374e1-88ac-444d-a0c0-35ccb6c9d67a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c51c16db79456144cf5b60de4d68195a07d86068a5e8e76cfa404828f4f9d49b\"" Jan 13 21:21:56.785739 containerd[1459]: time="2025-01-13T21:21:56.785655219Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:21:56.998543 kubelet[2593]: E0113 21:21:56.998459 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:56.998832 containerd[1459]: time="2025-01-13T21:21:56.998804910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bsmc,Uid:a87db975-7ab9-4b22-85ee-af6f1bbcad75,Namespace:kube-system,Attempt:0,}" Jan 13 21:21:57.023215 containerd[1459]: time="2025-01-13T21:21:57.022630211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:21:57.023215 containerd[1459]: time="2025-01-13T21:21:57.023195829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:21:57.023215 containerd[1459]: time="2025-01-13T21:21:57.023211899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:57.023419 containerd[1459]: time="2025-01-13T21:21:57.023308651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:21:57.044002 systemd[1]: Started cri-containerd-ed15f78139c6ef251162f28672bbc546faa35e7cbc3d04822c1130262d2b6586.scope - libcontainer container ed15f78139c6ef251162f28672bbc546faa35e7cbc3d04822c1130262d2b6586. Jan 13 21:21:57.065913 containerd[1459]: time="2025-01-13T21:21:57.065820306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bsmc,Uid:a87db975-7ab9-4b22-85ee-af6f1bbcad75,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed15f78139c6ef251162f28672bbc546faa35e7cbc3d04822c1130262d2b6586\"" Jan 13 21:21:57.066524 kubelet[2593]: E0113 21:21:57.066502 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:57.068272 containerd[1459]: time="2025-01-13T21:21:57.068227010Z" level=info msg="CreateContainer within sandbox \"ed15f78139c6ef251162f28672bbc546faa35e7cbc3d04822c1130262d2b6586\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:21:57.084560 containerd[1459]: time="2025-01-13T21:21:57.084528752Z" level=info msg="CreateContainer within sandbox \"ed15f78139c6ef251162f28672bbc546faa35e7cbc3d04822c1130262d2b6586\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d019347590b1ffce01d0228835491175ae5d2bc65bb3cbeecd9c33c052bd11a\"" Jan 13 21:21:57.085056 containerd[1459]: time="2025-01-13T21:21:57.085003669Z" level=info msg="StartContainer for \"5d019347590b1ffce01d0228835491175ae5d2bc65bb3cbeecd9c33c052bd11a\"" Jan 13 21:21:57.112998 systemd[1]: Started cri-containerd-5d019347590b1ffce01d0228835491175ae5d2bc65bb3cbeecd9c33c052bd11a.scope - libcontainer container 5d019347590b1ffce01d0228835491175ae5d2bc65bb3cbeecd9c33c052bd11a. Jan 13 21:21:57.146534 containerd[1459]: time="2025-01-13T21:21:57.146491286Z" level=info msg="StartContainer for \"5d019347590b1ffce01d0228835491175ae5d2bc65bb3cbeecd9c33c052bd11a\" returns successfully" Jan 13 21:21:57.492621 kubelet[2593]: E0113 21:21:57.492390 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:21:57.500980 kubelet[2593]: I0113 21:21:57.500907 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4bsmc" podStartSLOduration=1.5008850809999998 podStartE2EDuration="1.500885081s" podCreationTimestamp="2025-01-13 21:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:21:57.500511044 +0000 UTC m=+16.118041297" watchObservedRunningTime="2025-01-13 21:21:57.500885081 +0000 UTC m=+16.118415354" Jan 13 21:22:01.027305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424348056.mount: Deactivated successfully. 
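
Note that firstStartedPulling and lastFinishedPulling in the startup-latency line above are the zero time ("0001-01-01 ..."): no image pull happened for kube-proxy, so the SLO duration and the end-to-end duration coincide at ~1.5s. A small sketch of that bookkeeping, on the assumption that the tracker simply excludes the pull window when pull timestamps are set (the real tracker is more involved):

package main

import (
	"fmt"
	"time"
)

// startupDurations mimics the tracker's arithmetic: E2E is observedRunning minus
// created; the SLO duration additionally excludes time spent pulling images, if any.
func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e
	if !firstPull.IsZero() && !lastPull.IsZero() {
		slo -= lastPull.Sub(firstPull)
	}
	return slo, e2e
}

func main() {
	created := time.Date(2025, 1, 13, 21, 21, 56, 0, time.UTC)
	running := created.Add(1500885081 * time.Nanosecond) // 1.500885081s, from the log
	slo, e2e := startupDurations(created, time.Time{}, time.Time{}, running)
	fmt.Println(slo, e2e) // identical: zero pull timestamps mean no window to subtract
}
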
Jan 13 21:22:01.387493 containerd[1459]: time="2025-01-13T21:22:01.387390812Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:01.388208 containerd[1459]: time="2025-01-13T21:22:01.388150805Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763733" Jan 13 21:22:01.389345 containerd[1459]: time="2025-01-13T21:22:01.389316754Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:01.391443 containerd[1459]: time="2025-01-13T21:22:01.391408598Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:01.392126 containerd[1459]: time="2025-01-13T21:22:01.392087968Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.606356244s" Jan 13 21:22:01.392179 containerd[1459]: time="2025-01-13T21:22:01.392126451Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 21:22:01.396239 containerd[1459]: time="2025-01-13T21:22:01.396218245Z" level=info msg="CreateContainer within sandbox \"c51c16db79456144cf5b60de4d68195a07d86068a5e8e76cfa404828f4f9d49b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:22:01.406699 containerd[1459]: time="2025-01-13T21:22:01.406663137Z" level=info msg="CreateContainer within sandbox \"c51c16db79456144cf5b60de4d68195a07d86068a5e8e76cfa404828f4f9d49b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"56a58cd19517663cab1f8f197dc7819db2b769dd4fd83a79f45f99ffafdbe3ac\"" Jan 13 21:22:01.407052 containerd[1459]: time="2025-01-13T21:22:01.407013849Z" level=info msg="StartContainer for \"56a58cd19517663cab1f8f197dc7819db2b769dd4fd83a79f45f99ffafdbe3ac\"" Jan 13 21:22:01.440057 systemd[1]: Started cri-containerd-56a58cd19517663cab1f8f197dc7819db2b769dd4fd83a79f45f99ffafdbe3ac.scope - libcontainer container 56a58cd19517663cab1f8f197dc7819db2b769dd4fd83a79f45f99ffafdbe3ac. 
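
The operator pull above moved 21,758,492 bytes in 4.606356244s, about 4.5 MiB/s from quay.io. The arithmetic, with both constants taken from the log:

package main

import "fmt"

func main() {
	const imageBytes = 21758492  // image size from the log
	const seconds = 4.606356244  // pull duration from the log
	mib := float64(imageBytes) / (1 << 20)
	fmt.Printf("%.2f MiB in %.3fs = %.2f MiB/s\n", mib, seconds, mib/seconds)
	// 20.75 MiB in 4.606s = 4.50 MiB/s
}
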
Jan 13 21:22:01.466941 containerd[1459]: time="2025-01-13T21:22:01.466892290Z" level=info msg="StartContainer for \"56a58cd19517663cab1f8f197dc7819db2b769dd4fd83a79f45f99ffafdbe3ac\" returns successfully" Jan 13 21:22:01.507676 kubelet[2593]: I0113 21:22:01.507618 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-7v89n" podStartSLOduration=0.897785242 podStartE2EDuration="5.507603023s" podCreationTimestamp="2025-01-13 21:21:56 +0000 UTC" firstStartedPulling="2025-01-13 21:21:56.785325297 +0000 UTC m=+15.402855550" lastFinishedPulling="2025-01-13 21:22:01.395143078 +0000 UTC m=+20.012673331" observedRunningTime="2025-01-13 21:22:01.507290114 +0000 UTC m=+20.124820367" watchObservedRunningTime="2025-01-13 21:22:01.507603023 +0000 UTC m=+20.125133276" Jan 13 21:22:04.239813 kubelet[2593]: I0113 21:22:04.239766 2593 topology_manager.go:215] "Topology Admit Handler" podUID="bd72e81d-161b-4634-a3a4-c0476f0db049" podNamespace="calico-system" podName="calico-typha-95bb458d4-klq6x" Jan 13 21:22:04.251796 systemd[1]: Created slice kubepods-besteffort-podbd72e81d_161b_4634_a3a4_c0476f0db049.slice - libcontainer container kubepods-besteffort-podbd72e81d_161b_4634_a3a4_c0476f0db049.slice. Jan 13 21:22:04.259223 kubelet[2593]: I0113 21:22:04.259188 2593 topology_manager.go:215] "Topology Admit Handler" podUID="80493db1-e55f-4b32-a211-bddac50f2d60" podNamespace="calico-system" podName="calico-node-zx4mv" Jan 13 21:22:04.271639 systemd[1]: Created slice kubepods-besteffort-pod80493db1_e55f_4b32_a211_bddac50f2d60.slice - libcontainer container kubepods-besteffort-pod80493db1_e55f_4b32_a211_bddac50f2d60.slice. Jan 13 21:22:04.290703 kubelet[2593]: I0113 21:22:04.290633 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/80493db1-e55f-4b32-a211-bddac50f2d60-var-run-calico\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.290842 kubelet[2593]: I0113 21:22:04.290737 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/80493db1-e55f-4b32-a211-bddac50f2d60-cni-log-dir\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.290842 kubelet[2593]: I0113 21:22:04.290760 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/80493db1-e55f-4b32-a211-bddac50f2d60-cni-bin-dir\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.290842 kubelet[2593]: I0113 21:22:04.290778 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80493db1-e55f-4b32-a211-bddac50f2d60-tigera-ca-bundle\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.290842 kubelet[2593]: I0113 21:22:04.290799 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd72e81d-161b-4634-a3a4-c0476f0db049-typha-certs\") pod \"calico-typha-95bb458d4-klq6x\" (UID: 
\"bd72e81d-161b-4634-a3a4-c0476f0db049\") " pod="calico-system/calico-typha-95bb458d4-klq6x" Jan 13 21:22:04.290842 kubelet[2593]: I0113 21:22:04.290815 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/80493db1-e55f-4b32-a211-bddac50f2d60-policysync\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.290972 kubelet[2593]: I0113 21:22:04.290830 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80493db1-e55f-4b32-a211-bddac50f2d60-xtables-lock\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.290972 kubelet[2593]: I0113 21:22:04.290845 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/80493db1-e55f-4b32-a211-bddac50f2d60-flexvol-driver-host\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.290972 kubelet[2593]: I0113 21:22:04.290876 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd72e81d-161b-4634-a3a4-c0476f0db049-tigera-ca-bundle\") pod \"calico-typha-95bb458d4-klq6x\" (UID: \"bd72e81d-161b-4634-a3a4-c0476f0db049\") " pod="calico-system/calico-typha-95bb458d4-klq6x" Jan 13 21:22:04.290972 kubelet[2593]: I0113 21:22:04.290892 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d9vl\" (UniqueName: \"kubernetes.io/projected/bd72e81d-161b-4634-a3a4-c0476f0db049-kube-api-access-9d9vl\") pod \"calico-typha-95bb458d4-klq6x\" (UID: \"bd72e81d-161b-4634-a3a4-c0476f0db049\") " pod="calico-system/calico-typha-95bb458d4-klq6x" Jan 13 21:22:04.290972 kubelet[2593]: I0113 21:22:04.290907 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/80493db1-e55f-4b32-a211-bddac50f2d60-cni-net-dir\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.291092 kubelet[2593]: I0113 21:22:04.290921 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80493db1-e55f-4b32-a211-bddac50f2d60-lib-modules\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.291092 kubelet[2593]: I0113 21:22:04.290936 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/80493db1-e55f-4b32-a211-bddac50f2d60-node-certs\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.291092 kubelet[2593]: I0113 21:22:04.290951 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/80493db1-e55f-4b32-a211-bddac50f2d60-var-lib-calico\") pod \"calico-node-zx4mv\" (UID: 
\"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.291092 kubelet[2593]: I0113 21:22:04.290974 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ln96\" (UniqueName: \"kubernetes.io/projected/80493db1-e55f-4b32-a211-bddac50f2d60-kube-api-access-7ln96\") pod \"calico-node-zx4mv\" (UID: \"80493db1-e55f-4b32-a211-bddac50f2d60\") " pod="calico-system/calico-node-zx4mv" Jan 13 21:22:04.374619 kubelet[2593]: I0113 21:22:04.374568 2593 topology_manager.go:215] "Topology Admit Handler" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de" podNamespace="calico-system" podName="csi-node-driver-d4pmk" Jan 13 21:22:04.374903 kubelet[2593]: E0113 21:22:04.374849 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d4pmk" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de" Jan 13 21:22:04.391805 kubelet[2593]: I0113 21:22:04.391752 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hn82\" (UniqueName: \"kubernetes.io/projected/2faded03-4e90-4e3a-85c7-86d52abea6de-kube-api-access-4hn82\") pod \"csi-node-driver-d4pmk\" (UID: \"2faded03-4e90-4e3a-85c7-86d52abea6de\") " pod="calico-system/csi-node-driver-d4pmk" Jan 13 21:22:04.391957 kubelet[2593]: I0113 21:22:04.391829 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2faded03-4e90-4e3a-85c7-86d52abea6de-kubelet-dir\") pod \"csi-node-driver-d4pmk\" (UID: \"2faded03-4e90-4e3a-85c7-86d52abea6de\") " pod="calico-system/csi-node-driver-d4pmk" Jan 13 21:22:04.391957 kubelet[2593]: I0113 21:22:04.391844 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2faded03-4e90-4e3a-85c7-86d52abea6de-socket-dir\") pod \"csi-node-driver-d4pmk\" (UID: \"2faded03-4e90-4e3a-85c7-86d52abea6de\") " pod="calico-system/csi-node-driver-d4pmk" Jan 13 21:22:04.391957 kubelet[2593]: I0113 21:22:04.391894 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2faded03-4e90-4e3a-85c7-86d52abea6de-varrun\") pod \"csi-node-driver-d4pmk\" (UID: \"2faded03-4e90-4e3a-85c7-86d52abea6de\") " pod="calico-system/csi-node-driver-d4pmk" Jan 13 21:22:04.391957 kubelet[2593]: I0113 21:22:04.391947 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2faded03-4e90-4e3a-85c7-86d52abea6de-registration-dir\") pod \"csi-node-driver-d4pmk\" (UID: \"2faded03-4e90-4e3a-85c7-86d52abea6de\") " pod="calico-system/csi-node-driver-d4pmk" Jan 13 21:22:04.405373 kubelet[2593]: E0113 21:22:04.404594 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.405373 kubelet[2593]: W0113 21:22:04.404615 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.405373 kubelet[2593]: E0113 
21:22:04.404632 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.405373 kubelet[2593]: E0113 21:22:04.405220 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.405373 kubelet[2593]: W0113 21:22:04.405229 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.405373 kubelet[2593]: E0113 21:22:04.405300 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.406180 kubelet[2593]: E0113 21:22:04.406154 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.406180 kubelet[2593]: W0113 21:22:04.406170 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.406744 kubelet[2593]: E0113 21:22:04.406715 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.407291 kubelet[2593]: E0113 21:22:04.407269 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.407291 kubelet[2593]: W0113 21:22:04.407284 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.408178 kubelet[2593]: E0113 21:22:04.407638 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.411043 kubelet[2593]: E0113 21:22:04.411015 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.411043 kubelet[2593]: W0113 21:22:04.411031 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.411640 kubelet[2593]: E0113 21:22:04.411610 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.411640 kubelet[2593]: W0113 21:22:04.411627 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.411640 kubelet[2593]: E0113 21:22:04.411639 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.411851 kubelet[2593]: E0113 21:22:04.411815 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:04.413868 kubelet[2593]: E0113 21:22:04.413835 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.413868 kubelet[2593]: W0113 21:22:04.413898 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.413868 kubelet[2593]: E0113 21:22:04.413959 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.414397 kubelet[2593]: E0113 21:22:04.414368 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.414435 kubelet[2593]: W0113 21:22:04.414406 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.414435 kubelet[2593]: E0113 21:22:04.414418 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.417978 kubelet[2593]: E0113 21:22:04.417876 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.417978 kubelet[2593]: W0113 21:22:04.417890 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.417978 kubelet[2593]: E0113 21:22:04.417900 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.493336 kubelet[2593]: E0113 21:22:04.493215 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.493336 kubelet[2593]: W0113 21:22:04.493242 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.493336 kubelet[2593]: E0113 21:22:04.493268 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.493837 kubelet[2593]: E0113 21:22:04.493631 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.493837 kubelet[2593]: W0113 21:22:04.493653 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.493837 kubelet[2593]: E0113 21:22:04.493679 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:04.494101 kubelet[2593]: E0113 21:22:04.494043 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.494230 kubelet[2593]: W0113 21:22:04.494204 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.494440 kubelet[2593]: E0113 21:22:04.494358 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.494803 kubelet[2593]: E0113 21:22:04.494776 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.494803 kubelet[2593]: W0113 21:22:04.494792 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.495014 kubelet[2593]: E0113 21:22:04.494831 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.495248 kubelet[2593]: E0113 21:22:04.495231 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.495248 kubelet[2593]: W0113 21:22:04.495246 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.495325 kubelet[2593]: E0113 21:22:04.495260 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.495595 kubelet[2593]: E0113 21:22:04.495561 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.495595 kubelet[2593]: W0113 21:22:04.495589 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.495696 kubelet[2593]: E0113 21:22:04.495675 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.495901 kubelet[2593]: E0113 21:22:04.495883 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.495901 kubelet[2593]: W0113 21:22:04.495896 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.495991 kubelet[2593]: E0113 21:22:04.495929 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:04.496115 kubelet[2593]: E0113 21:22:04.496093 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.496115 kubelet[2593]: W0113 21:22:04.496102 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.496170 kubelet[2593]: E0113 21:22:04.496126 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.496300 kubelet[2593]: E0113 21:22:04.496284 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.496300 kubelet[2593]: W0113 21:22:04.496295 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.496389 kubelet[2593]: E0113 21:22:04.496330 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.496502 kubelet[2593]: E0113 21:22:04.496482 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.496502 kubelet[2593]: W0113 21:22:04.496494 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.496598 kubelet[2593]: E0113 21:22:04.496582 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.496819 kubelet[2593]: E0113 21:22:04.496795 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.496819 kubelet[2593]: W0113 21:22:04.496808 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.496998 kubelet[2593]: E0113 21:22:04.496834 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.497084 kubelet[2593]: E0113 21:22:04.497068 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.497084 kubelet[2593]: W0113 21:22:04.497082 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.497136 kubelet[2593]: E0113 21:22:04.497094 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:04.497325 kubelet[2593]: E0113 21:22:04.497306 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.497325 kubelet[2593]: W0113 21:22:04.497320 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.497404 kubelet[2593]: E0113 21:22:04.497335 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.498080 kubelet[2593]: E0113 21:22:04.498047 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.498080 kubelet[2593]: W0113 21:22:04.498063 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.498198 kubelet[2593]: E0113 21:22:04.498171 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.499241 kubelet[2593]: E0113 21:22:04.498378 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.499241 kubelet[2593]: W0113 21:22:04.498401 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.499241 kubelet[2593]: E0113 21:22:04.498436 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.499241 kubelet[2593]: E0113 21:22:04.498682 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.499241 kubelet[2593]: W0113 21:22:04.498691 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.499241 kubelet[2593]: E0113 21:22:04.498769 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.499241 kubelet[2593]: E0113 21:22:04.499065 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.499241 kubelet[2593]: W0113 21:22:04.499073 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.499241 kubelet[2593]: E0113 21:22:04.499116 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:04.499486 kubelet[2593]: E0113 21:22:04.499320 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.499486 kubelet[2593]: W0113 21:22:04.499328 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.499486 kubelet[2593]: E0113 21:22:04.499354 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.499615 kubelet[2593]: E0113 21:22:04.499595 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.499670 kubelet[2593]: W0113 21:22:04.499631 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.499670 kubelet[2593]: E0113 21:22:04.499648 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.499979 kubelet[2593]: E0113 21:22:04.499960 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.499979 kubelet[2593]: W0113 21:22:04.499974 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.499979 kubelet[2593]: E0113 21:22:04.499990 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.500308 kubelet[2593]: E0113 21:22:04.500291 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.500308 kubelet[2593]: W0113 21:22:04.500304 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.500373 kubelet[2593]: E0113 21:22:04.500320 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.500521 kubelet[2593]: E0113 21:22:04.500507 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.500551 kubelet[2593]: W0113 21:22:04.500528 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.500551 kubelet[2593]: E0113 21:22:04.500542 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:04.501353 kubelet[2593]: E0113 21:22:04.501334 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.501353 kubelet[2593]: W0113 21:22:04.501348 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.501499 kubelet[2593]: E0113 21:22:04.501464 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.501619 kubelet[2593]: E0113 21:22:04.501600 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.501619 kubelet[2593]: W0113 21:22:04.501615 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.501709 kubelet[2593]: E0113 21:22:04.501659 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.503005 kubelet[2593]: E0113 21:22:04.502976 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.503005 kubelet[2593]: W0113 21:22:04.502988 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.503005 kubelet[2593]: E0113 21:22:04.502998 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:22:04.506410 kubelet[2593]: E0113 21:22:04.506210 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:22:04.506410 kubelet[2593]: W0113 21:22:04.506230 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:22:04.506410 kubelet[2593]: E0113 21:22:04.506243 2593 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:22:04.558282 kubelet[2593]: E0113 21:22:04.558249 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:04.559106 containerd[1459]: time="2025-01-13T21:22:04.558740375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-95bb458d4-klq6x,Uid:bd72e81d-161b-4634-a3a4-c0476f0db049,Namespace:calico-system,Attempt:0,}" Jan 13 21:22:04.577251 kubelet[2593]: E0113 21:22:04.577215 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:04.578016 containerd[1459]: time="2025-01-13T21:22:04.577984413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zx4mv,Uid:80493db1-e55f-4b32-a211-bddac50f2d60,Namespace:calico-system,Attempt:0,}" Jan 13 21:22:04.583936 containerd[1459]: time="2025-01-13T21:22:04.583598628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:04.583936 containerd[1459]: time="2025-01-13T21:22:04.583653240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:04.583936 containerd[1459]: time="2025-01-13T21:22:04.583697114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:04.583936 containerd[1459]: time="2025-01-13T21:22:04.583895366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:04.603965 containerd[1459]: time="2025-01-13T21:22:04.602772584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:04.603965 containerd[1459]: time="2025-01-13T21:22:04.603369758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:04.603965 containerd[1459]: time="2025-01-13T21:22:04.603381200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:04.603965 containerd[1459]: time="2025-01-13T21:22:04.603456943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:04.607078 systemd[1]: Started cri-containerd-ba3718d143a2a3004f51f9eebebeb458b71d45fe8da708f6829858da784c2dc1.scope - libcontainer container ba3718d143a2a3004f51f9eebebeb458b71d45fe8da708f6829858da784c2dc1. Jan 13 21:22:04.622015 systemd[1]: Started cri-containerd-1223c0766ef6339cecd60bee6dc38efd9a78c9856ef589b95163f93f35b7f4b2.scope - libcontainer container 1223c0766ef6339cecd60bee6dc38efd9a78c9856ef589b95163f93f35b7f4b2. 
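
The burst of driver-call.go/plugins.go errors a few lines up is kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the "init" command before Calico has installed that binary: the exec fails, stdout is empty, and unmarshalling "" as JSON yields "unexpected end of JSON input". A FlexVolume driver is just an executable that answers such calls with JSON on stdout; a minimal sketch of the init handshake, with field names following the FlexVolume convention — this is not Calico's actual uds driver:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// initResponse is the JSON a FlexVolume driver prints in reply to "init".
type initResponse struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 || os.Args[1] != "init" {
		json.NewEncoder(os.Stdout).Encode(map[string]string{"status": "Not supported"})
		os.Exit(1)
	}
	// Declaring attach=false tells kubelet to skip attach/detach calls entirely.
	out, _ := json.Marshal(initResponse{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	})
	fmt.Println(string(out))
}

attach: false suits a driver like nodeagent~uds that only brokers a Unix socket rather than attaching block storage.
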
Jan 13 21:22:04.647111 containerd[1459]: time="2025-01-13T21:22:04.646263512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zx4mv,Uid:80493db1-e55f-4b32-a211-bddac50f2d60,Namespace:calico-system,Attempt:0,} returns sandbox id \"1223c0766ef6339cecd60bee6dc38efd9a78c9856ef589b95163f93f35b7f4b2\"" Jan 13 21:22:04.648717 kubelet[2593]: E0113 21:22:04.648484 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:04.656518 containerd[1459]: time="2025-01-13T21:22:04.656247809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:22:04.658652 containerd[1459]: time="2025-01-13T21:22:04.658587205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-95bb458d4-klq6x,Uid:bd72e81d-161b-4634-a3a4-c0476f0db049,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba3718d143a2a3004f51f9eebebeb458b71d45fe8da708f6829858da784c2dc1\"" Jan 13 21:22:04.659191 kubelet[2593]: E0113 21:22:04.659089 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:06.369364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348147036.mount: Deactivated successfully. Jan 13 21:22:06.461791 kubelet[2593]: E0113 21:22:06.461737 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d4pmk" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de" Jan 13 21:22:06.487138 containerd[1459]: time="2025-01-13T21:22:06.487071791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:06.487775 containerd[1459]: time="2025-01-13T21:22:06.487735200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 21:22:06.489056 containerd[1459]: time="2025-01-13T21:22:06.489022945Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:06.491218 containerd[1459]: time="2025-01-13T21:22:06.491184494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:06.491830 containerd[1459]: time="2025-01-13T21:22:06.491787920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.835501829s" Jan 13 21:22:06.491901 containerd[1459]: time="2025-01-13T21:22:06.491829879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:22:06.492620 
containerd[1459]: time="2025-01-13T21:22:06.492597454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:22:06.504465 containerd[1459]: time="2025-01-13T21:22:06.504434959Z" level=info msg="CreateContainer within sandbox \"1223c0766ef6339cecd60bee6dc38efd9a78c9856ef589b95163f93f35b7f4b2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:22:06.518539 containerd[1459]: time="2025-01-13T21:22:06.518508514Z" level=info msg="CreateContainer within sandbox \"1223c0766ef6339cecd60bee6dc38efd9a78c9856ef589b95163f93f35b7f4b2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ed67f758a50fe66f6bf948da95b00a77c23d944cc04ad0342c10c2bc0844e459\"" Jan 13 21:22:06.521957 containerd[1459]: time="2025-01-13T21:22:06.521844684Z" level=info msg="StartContainer for \"ed67f758a50fe66f6bf948da95b00a77c23d944cc04ad0342c10c2bc0844e459\"" Jan 13 21:22:06.553977 systemd[1]: Started cri-containerd-ed67f758a50fe66f6bf948da95b00a77c23d944cc04ad0342c10c2bc0844e459.scope - libcontainer container ed67f758a50fe66f6bf948da95b00a77c23d944cc04ad0342c10c2bc0844e459. Jan 13 21:22:06.581830 containerd[1459]: time="2025-01-13T21:22:06.581788735Z" level=info msg="StartContainer for \"ed67f758a50fe66f6bf948da95b00a77c23d944cc04ad0342c10c2bc0844e459\" returns successfully" Jan 13 21:22:06.594499 systemd[1]: cri-containerd-ed67f758a50fe66f6bf948da95b00a77c23d944cc04ad0342c10c2bc0844e459.scope: Deactivated successfully. Jan 13 21:22:06.669155 containerd[1459]: time="2025-01-13T21:22:06.668980049Z" level=info msg="shim disconnected" id=ed67f758a50fe66f6bf948da95b00a77c23d944cc04ad0342c10c2bc0844e459 namespace=k8s.io Jan 13 21:22:06.669155 containerd[1459]: time="2025-01-13T21:22:06.669058377Z" level=warning msg="cleaning up after shim disconnected" id=ed67f758a50fe66f6bf948da95b00a77c23d944cc04ad0342c10c2bc0844e459 namespace=k8s.io Jan 13 21:22:06.669155 containerd[1459]: time="2025-01-13T21:22:06.669067464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:22:07.510935 kubelet[2593]: E0113 21:22:07.510901 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:07.515700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed67f758a50fe66f6bf948da95b00a77c23d944cc04ad0342c10c2bc0844e459-rootfs.mount: Deactivated successfully. 
Jan 13 21:22:08.460641 kubelet[2593]: E0113 21:22:08.460595 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d4pmk" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de" Jan 13 21:22:09.396682 containerd[1459]: time="2025-01-13T21:22:09.396624559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:09.397351 containerd[1459]: time="2025-01-13T21:22:09.397286524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 13 21:22:09.398596 containerd[1459]: time="2025-01-13T21:22:09.398569938Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:09.400671 containerd[1459]: time="2025-01-13T21:22:09.400625436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:09.401252 containerd[1459]: time="2025-01-13T21:22:09.401218241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.90857487s" Jan 13 21:22:09.401288 containerd[1459]: time="2025-01-13T21:22:09.401252605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 21:22:09.402841 containerd[1459]: time="2025-01-13T21:22:09.402681965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:22:09.408697 containerd[1459]: time="2025-01-13T21:22:09.408663278Z" level=info msg="CreateContainer within sandbox \"ba3718d143a2a3004f51f9eebebeb458b71d45fe8da708f6829858da784c2dc1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:22:09.425274 containerd[1459]: time="2025-01-13T21:22:09.425233982Z" level=info msg="CreateContainer within sandbox \"ba3718d143a2a3004f51f9eebebeb458b71d45fe8da708f6829858da784c2dc1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"20d42338f7a82dbd8f96f9e5293fa55b589b0a28051ba79097717ed99b7d19db\"" Jan 13 21:22:09.425737 containerd[1459]: time="2025-01-13T21:22:09.425702683Z" level=info msg="StartContainer for \"20d42338f7a82dbd8f96f9e5293fa55b589b0a28051ba79097717ed99b7d19db\"" Jan 13 21:22:09.454976 systemd[1]: Started cri-containerd-20d42338f7a82dbd8f96f9e5293fa55b589b0a28051ba79097717ed99b7d19db.scope - libcontainer container 20d42338f7a82dbd8f96f9e5293fa55b589b0a28051ba79097717ed99b7d19db. 
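
As a consistency check on the typha pull above: the PullImage entry was logged at 2025-01-13T21:22:06.492597454Z and the Pulled entry at 2025-01-13T21:22:09.401218241Z, which agrees with the reported "2.90857487s" to within tens of microseconds (the log timestamp is taken slightly after the internal measurement):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both timestamps copied from the containerd entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:22:06.492597454Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:22:09.401218241Z")
	fmt.Println(done.Sub(start)) // 2.908620787s vs. the reported 2.90857487s
}
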
Jan 13 21:22:09.492068 containerd[1459]: time="2025-01-13T21:22:09.492015809Z" level=info msg="StartContainer for \"20d42338f7a82dbd8f96f9e5293fa55b589b0a28051ba79097717ed99b7d19db\" returns successfully" Jan 13 21:22:09.517700 kubelet[2593]: E0113 21:22:09.517648 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:10.458564 kubelet[2593]: E0113 21:22:10.458490 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d4pmk" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de" Jan 13 21:22:10.519351 kubelet[2593]: I0113 21:22:10.519313 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:10.519907 kubelet[2593]: E0113 21:22:10.519888 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:11.499345 systemd[1]: Started sshd@9-10.0.0.66:22-10.0.0.1:46916.service - OpenSSH per-connection server daemon (10.0.0.1:46916). Jan 13 21:22:11.527801 sshd[3238]: Accepted publickey for core from 10.0.0.1 port 46916 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:11.529452 sshd[3238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:11.533567 systemd-logind[1443]: New session 10 of user core. Jan 13 21:22:11.540127 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:22:11.659378 sshd[3238]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:11.664335 systemd[1]: sshd@9-10.0.0.66:22-10.0.0.1:46916.service: Deactivated successfully. Jan 13 21:22:11.666583 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:22:11.667184 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:22:11.668132 systemd-logind[1443]: Removed session 10. 
Jan 13 21:22:12.458142 kubelet[2593]: E0113 21:22:12.458082 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d4pmk" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de"
Jan 13 21:22:14.458758 kubelet[2593]: E0113 21:22:14.458568 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d4pmk" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de"
Jan 13 21:22:15.294323 containerd[1459]: time="2025-01-13T21:22:15.294278772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:15.295056 containerd[1459]: time="2025-01-13T21:22:15.295007091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 13 21:22:15.296193 containerd[1459]: time="2025-01-13T21:22:15.296115073Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:15.298208 containerd[1459]: time="2025-01-13T21:22:15.298179734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:15.298740 containerd[1459]: time="2025-01-13T21:22:15.298709800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.896000133s"
Jan 13 21:22:15.298785 containerd[1459]: time="2025-01-13T21:22:15.298738744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 13 21:22:15.300583 containerd[1459]: time="2025-01-13T21:22:15.300555378Z" level=info msg="CreateContainer within sandbox \"1223c0766ef6339cecd60bee6dc38efd9a78c9856ef589b95163f93f35b7f4b2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 21:22:15.312544 containerd[1459]: time="2025-01-13T21:22:15.312509846Z" level=info msg="CreateContainer within sandbox \"1223c0766ef6339cecd60bee6dc38efd9a78c9856ef589b95163f93f35b7f4b2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"283ff4e5cc127a52104653b846aca3b7dc27a0081fcc4e78e8430e08c51257a2\""
Jan 13 21:22:15.313172 containerd[1459]: time="2025-01-13T21:22:15.312958099Z" level=info msg="StartContainer for \"283ff4e5cc127a52104653b846aca3b7dc27a0081fcc4e78e8430e08c51257a2\""
Jan 13 21:22:15.340988 systemd[1]: Started cri-containerd-283ff4e5cc127a52104653b846aca3b7dc27a0081fcc4e78e8430e08c51257a2.scope - libcontainer container 283ff4e5cc127a52104653b846aca3b7dc27a0081fcc4e78e8430e08c51257a2.
Jan 13 21:22:15.368534 containerd[1459]: time="2025-01-13T21:22:15.368416997Z" level=info msg="StartContainer for \"283ff4e5cc127a52104653b846aca3b7dc27a0081fcc4e78e8430e08c51257a2\" returns successfully"
Jan 13 21:22:15.528569 kubelet[2593]: E0113 21:22:15.528525 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:22:15.538536 kubelet[2593]: I0113 21:22:15.538461 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-95bb458d4-klq6x" podStartSLOduration=6.796190804 podStartE2EDuration="11.538442003s" podCreationTimestamp="2025-01-13 21:22:04 +0000 UTC" firstStartedPulling="2025-01-13 21:22:04.659716321 +0000 UTC m=+23.277246575" lastFinishedPulling="2025-01-13 21:22:09.401967521 +0000 UTC m=+28.019497774" observedRunningTime="2025-01-13 21:22:09.527265917 +0000 UTC m=+28.144796170" watchObservedRunningTime="2025-01-13 21:22:15.538442003 +0000 UTC m=+34.155972256"
Jan 13 21:22:16.458310 kubelet[2593]: E0113 21:22:16.458254 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d4pmk" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de"
Jan 13 21:22:16.519196 systemd[1]: cri-containerd-283ff4e5cc127a52104653b846aca3b7dc27a0081fcc4e78e8430e08c51257a2.scope: Deactivated successfully.
Jan 13 21:22:16.525577 kubelet[2593]: I0113 21:22:16.525548 2593 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:22:16.530316 kubelet[2593]: E0113 21:22:16.530286 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:22:16.542050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-283ff4e5cc127a52104653b846aca3b7dc27a0081fcc4e78e8430e08c51257a2-rootfs.mount: Deactivated successfully.
Jan 13 21:22:16.548425 kubelet[2593]: I0113 21:22:16.548068 2593 topology_manager.go:215] "Topology Admit Handler" podUID="95694724-301a-4ddf-b650-581e197892dc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wv2tn"
Jan 13 21:22:16.556309 kubelet[2593]: I0113 21:22:16.548604 2593 topology_manager.go:215] "Topology Admit Handler" podUID="b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zkpxx"
Jan 13 21:22:16.556309 kubelet[2593]: I0113 21:22:16.548710 2593 topology_manager.go:215] "Topology Admit Handler" podUID="10badecf-9cf2-455b-8b0e-b7541f200545" podNamespace="calico-system" podName="calico-kube-controllers-6496d4bbf-28tcw"
Jan 13 21:22:16.556309 kubelet[2593]: I0113 21:22:16.550531 2593 topology_manager.go:215] "Topology Admit Handler" podUID="4b08e0a5-05ea-4ffe-b58b-456364b2d1ae" podNamespace="calico-apiserver" podName="calico-apiserver-64584b8b84-2rfb7"
Jan 13 21:22:16.556309 kubelet[2593]: I0113 21:22:16.550892 2593 topology_manager.go:215] "Topology Admit Handler" podUID="c65feba5-e029-4c69-b7ee-c32a5deacfc3" podNamespace="calico-apiserver" podName="calico-apiserver-64584b8b84-x9bt6"
Jan 13 21:22:16.555328 systemd[1]: Created slice kubepods-burstable-podb8ee4f35_b1c3_4ecd_b94b_0ab75b2b3ba7.slice - libcontainer container kubepods-burstable-podb8ee4f35_b1c3_4ecd_b94b_0ab75b2b3ba7.slice.
Jan 13 21:22:16.561751 systemd[1]: Created slice kubepods-burstable-pod95694724_301a_4ddf_b650_581e197892dc.slice - libcontainer container kubepods-burstable-pod95694724_301a_4ddf_b650_581e197892dc.slice.
Jan 13 21:22:16.567095 systemd[1]: Created slice kubepods-besteffort-pod10badecf_9cf2_455b_8b0e_b7541f200545.slice - libcontainer container kubepods-besteffort-pod10badecf_9cf2_455b_8b0e_b7541f200545.slice.
Jan 13 21:22:16.572146 systemd[1]: Created slice kubepods-besteffort-podc65feba5_e029_4c69_b7ee_c32a5deacfc3.slice - libcontainer container kubepods-besteffort-podc65feba5_e029_4c69_b7ee_c32a5deacfc3.slice.
Jan 13 21:22:16.576569 systemd[1]: Created slice kubepods-besteffort-pod4b08e0a5_05ea_4ffe_b58b_456364b2d1ae.slice - libcontainer container kubepods-besteffort-pod4b08e0a5_05ea_4ffe_b58b_456364b2d1ae.slice.
Jan 13 21:22:16.582966 kubelet[2593]: I0113 21:22:16.582626 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h56l\" (UniqueName: \"kubernetes.io/projected/b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7-kube-api-access-2h56l\") pod \"coredns-7db6d8ff4d-zkpxx\" (UID: \"b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7\") " pod="kube-system/coredns-7db6d8ff4d-zkpxx"
Jan 13 21:22:16.582966 kubelet[2593]: I0113 21:22:16.582660 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hzv\" (UniqueName: \"kubernetes.io/projected/10badecf-9cf2-455b-8b0e-b7541f200545-kube-api-access-89hzv\") pod \"calico-kube-controllers-6496d4bbf-28tcw\" (UID: \"10badecf-9cf2-455b-8b0e-b7541f200545\") " pod="calico-system/calico-kube-controllers-6496d4bbf-28tcw"
Jan 13 21:22:16.582966 kubelet[2593]: I0113 21:22:16.582683 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95694724-301a-4ddf-b650-581e197892dc-config-volume\") pod \"coredns-7db6d8ff4d-wv2tn\" (UID: \"95694724-301a-4ddf-b650-581e197892dc\") " pod="kube-system/coredns-7db6d8ff4d-wv2tn"
Jan 13 21:22:16.582966 kubelet[2593]: I0113 21:22:16.582701 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c65feba5-e029-4c69-b7ee-c32a5deacfc3-calico-apiserver-certs\") pod \"calico-apiserver-64584b8b84-x9bt6\" (UID: \"c65feba5-e029-4c69-b7ee-c32a5deacfc3\") " pod="calico-apiserver/calico-apiserver-64584b8b84-x9bt6"
Jan 13 21:22:16.582966 kubelet[2593]: I0113 21:22:16.582730 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b08e0a5-05ea-4ffe-b58b-456364b2d1ae-calico-apiserver-certs\") pod \"calico-apiserver-64584b8b84-2rfb7\" (UID: \"4b08e0a5-05ea-4ffe-b58b-456364b2d1ae\") " pod="calico-apiserver/calico-apiserver-64584b8b84-2rfb7"
Jan 13 21:22:16.583175 kubelet[2593]: I0113 21:22:16.582750 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7-config-volume\") pod \"coredns-7db6d8ff4d-zkpxx\" (UID: \"b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7\") " pod="kube-system/coredns-7db6d8ff4d-zkpxx"
Jan 13 21:22:16.583175 kubelet[2593]: I0113 21:22:16.582768 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10badecf-9cf2-455b-8b0e-b7541f200545-tigera-ca-bundle\") pod \"calico-kube-controllers-6496d4bbf-28tcw\" (UID: \"10badecf-9cf2-455b-8b0e-b7541f200545\") " pod="calico-system/calico-kube-controllers-6496d4bbf-28tcw"
Jan 13 21:22:16.583175 kubelet[2593]: I0113 21:22:16.582797 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsw7b\" (UniqueName: \"kubernetes.io/projected/95694724-301a-4ddf-b650-581e197892dc-kube-api-access-tsw7b\") pod \"coredns-7db6d8ff4d-wv2tn\" (UID: \"95694724-301a-4ddf-b650-581e197892dc\") " pod="kube-system/coredns-7db6d8ff4d-wv2tn"
Jan 13 21:22:16.583175 kubelet[2593]: I0113 21:22:16.582818 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmpb\" (UniqueName: \"kubernetes.io/projected/c65feba5-e029-4c69-b7ee-c32a5deacfc3-kube-api-access-vkmpb\") pod \"calico-apiserver-64584b8b84-x9bt6\" (UID: \"c65feba5-e029-4c69-b7ee-c32a5deacfc3\") " pod="calico-apiserver/calico-apiserver-64584b8b84-x9bt6"
Jan 13 21:22:16.583175 kubelet[2593]: I0113 21:22:16.582902 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwcwc\" (UniqueName: \"kubernetes.io/projected/4b08e0a5-05ea-4ffe-b58b-456364b2d1ae-kube-api-access-fwcwc\") pod \"calico-apiserver-64584b8b84-2rfb7\" (UID: \"4b08e0a5-05ea-4ffe-b58b-456364b2d1ae\") " pod="calico-apiserver/calico-apiserver-64584b8b84-2rfb7"
Jan 13 21:22:16.586057 containerd[1459]: time="2025-01-13T21:22:16.585989789Z" level=info msg="shim disconnected" id=283ff4e5cc127a52104653b846aca3b7dc27a0081fcc4e78e8430e08c51257a2 namespace=k8s.io
Jan 13 21:22:16.586057 containerd[1459]: time="2025-01-13T21:22:16.586036988Z" level=warning msg="cleaning up after shim disconnected" id=283ff4e5cc127a52104653b846aca3b7dc27a0081fcc4e78e8430e08c51257a2 namespace=k8s.io
Jan 13 21:22:16.586057 containerd[1459]: time="2025-01-13T21:22:16.586045003Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:22:16.669405 systemd[1]: Started sshd@10-10.0.0.66:22-10.0.0.1:46932.service - OpenSSH per-connection server daemon (10.0.0.1:46932).
Jan 13 21:22:16.718048 sshd[3324]: Accepted publickey for core from 10.0.0.1 port 46932 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:22:16.719384 sshd[3324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:16.723353 systemd-logind[1443]: New session 11 of user core.
Jan 13 21:22:16.733040 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:22:16.838737 sshd[3324]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:16.843118 systemd[1]: sshd@10-10.0.0.66:22-10.0.0.1:46932.service: Deactivated successfully.
Jan 13 21:22:16.845070 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:22:16.845712 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:22:16.846672 systemd-logind[1443]: Removed session 11.
Jan 13 21:22:16.858326 kubelet[2593]: E0113 21:22:16.858296 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:22:16.858868 containerd[1459]: time="2025-01-13T21:22:16.858826940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zkpxx,Uid:b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:16.864486 kubelet[2593]: E0113 21:22:16.864420 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:22:16.864752 containerd[1459]: time="2025-01-13T21:22:16.864729543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wv2tn,Uid:95694724-301a-4ddf-b650-581e197892dc,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:16.871114 containerd[1459]: time="2025-01-13T21:22:16.871066531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496d4bbf-28tcw,Uid:10badecf-9cf2-455b-8b0e-b7541f200545,Namespace:calico-system,Attempt:0,}"
Jan 13 21:22:16.874752 containerd[1459]: time="2025-01-13T21:22:16.874711589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64584b8b84-x9bt6,Uid:c65feba5-e029-4c69-b7ee-c32a5deacfc3,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:22:16.879205 containerd[1459]: time="2025-01-13T21:22:16.879178704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64584b8b84-2rfb7,Uid:4b08e0a5-05ea-4ffe-b58b-456364b2d1ae,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:22:17.029226 containerd[1459]: time="2025-01-13T21:22:17.028999724Z" level=error msg="Failed to destroy network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.029616 containerd[1459]: time="2025-01-13T21:22:17.029594391Z" level=error msg="encountered an error cleaning up failed sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.029725 containerd[1459]: time="2025-01-13T21:22:17.029704868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496d4bbf-28tcw,Uid:10badecf-9cf2-455b-8b0e-b7541f200545,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.033678 containerd[1459]: time="2025-01-13T21:22:17.033586121Z" level=error msg="Failed to destroy network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.033992 containerd[1459]: time="2025-01-13T21:22:17.033968911Z" level=error msg="encountered an error cleaning up failed sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.034905 containerd[1459]: time="2025-01-13T21:22:17.034813147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wv2tn,Uid:95694724-301a-4ddf-b650-581e197892dc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.034986 containerd[1459]: time="2025-01-13T21:22:17.034940375Z" level=error msg="Failed to destroy network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.035469 containerd[1459]: time="2025-01-13T21:22:17.035317714Z" level=error msg="encountered an error cleaning up failed sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.035469 containerd[1459]: time="2025-01-13T21:22:17.035362389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64584b8b84-2rfb7,Uid:4b08e0a5-05ea-4ffe-b58b-456364b2d1ae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.035550 containerd[1459]: time="2025-01-13T21:22:17.035530474Z" level=error msg="Failed to destroy network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.035910 containerd[1459]: time="2025-01-13T21:22:17.035846017Z" level=error msg="encountered an error cleaning up failed sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.035958 containerd[1459]: time="2025-01-13T21:22:17.035919015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zkpxx,Uid:b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.036918 containerd[1459]: time="2025-01-13T21:22:17.036842500Z" level=error msg="Failed to destroy network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.037214 containerd[1459]: time="2025-01-13T21:22:17.037177609Z" level=error msg="encountered an error cleaning up failed sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.037249 containerd[1459]: time="2025-01-13T21:22:17.037221331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64584b8b84-x9bt6,Uid:c65feba5-e029-4c69-b7ee-c32a5deacfc3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.040161 kubelet[2593]: E0113 21:22:17.040056 2593 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.040161 kubelet[2593]: E0113 21:22:17.040113 2593 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.040161 kubelet[2593]: E0113 21:22:17.040104 2593 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.040161 kubelet[2593]: E0113 21:22:17.040101 2593 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.040290 kubelet[2593]: E0113 21:22:17.040144 2593 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wv2tn"
Jan 13 21:22:17.040290 kubelet[2593]: E0113 21:22:17.040162 2593 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zkpxx"
Jan 13 21:22:17.040290 kubelet[2593]: E0113 21:22:17.040170 2593 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wv2tn"
Jan 13 21:22:17.040290 kubelet[2593]: E0113 21:22:17.040184 2593 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zkpxx"
Jan 13 21:22:17.040382 kubelet[2593]: E0113 21:22:17.040144 2593 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64584b8b84-x9bt6"
Jan 13 21:22:17.040382 kubelet[2593]: E0113 21:22:17.040211 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wv2tn_kube-system(95694724-301a-4ddf-b650-581e197892dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wv2tn_kube-system(95694724-301a-4ddf-b650-581e197892dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wv2tn" podUID="95694724-301a-4ddf-b650-581e197892dc"
Jan 13 21:22:17.040382 kubelet[2593]: E0113 21:22:17.040221 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zkpxx_kube-system(b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zkpxx_kube-system(b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zkpxx" podUID="b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7"
Jan 13 21:22:17.040491 kubelet[2593]: E0113 21:22:17.040162 2593 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64584b8b84-2rfb7"
Jan 13 21:22:17.040491 kubelet[2593]: E0113 21:22:17.040216 2593 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64584b8b84-x9bt6"
Jan 13 21:22:17.040491 kubelet[2593]: E0113 21:22:17.040247 2593 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64584b8b84-2rfb7"
Jan 13 21:22:17.040564 kubelet[2593]: E0113 21:22:17.040264 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64584b8b84-x9bt6_calico-apiserver(c65feba5-e029-4c69-b7ee-c32a5deacfc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64584b8b84-x9bt6_calico-apiserver(c65feba5-e029-4c69-b7ee-c32a5deacfc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64584b8b84-x9bt6" podUID="c65feba5-e029-4c69-b7ee-c32a5deacfc3"
Jan 13 21:22:17.040564 kubelet[2593]: E0113 21:22:17.040056 2593 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.040564 kubelet[2593]: E0113 21:22:17.040281 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64584b8b84-2rfb7_calico-apiserver(4b08e0a5-05ea-4ffe-b58b-456364b2d1ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64584b8b84-2rfb7_calico-apiserver(4b08e0a5-05ea-4ffe-b58b-456364b2d1ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64584b8b84-2rfb7" podUID="4b08e0a5-05ea-4ffe-b58b-456364b2d1ae"
Jan 13 21:22:17.040666 kubelet[2593]: E0113 21:22:17.040292 2593 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6496d4bbf-28tcw"
Jan 13 21:22:17.040666 kubelet[2593]: E0113 21:22:17.040313 2593 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6496d4bbf-28tcw"
Jan 13 21:22:17.040666 kubelet[2593]: E0113 21:22:17.040335 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6496d4bbf-28tcw_calico-system(10badecf-9cf2-455b-8b0e-b7541f200545)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6496d4bbf-28tcw_calico-system(10badecf-9cf2-455b-8b0e-b7541f200545)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6496d4bbf-28tcw" podUID="10badecf-9cf2-455b-8b0e-b7541f200545"
Jan 13 21:22:17.533168 kubelet[2593]: E0113 21:22:17.533047 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:22:17.533646 kubelet[2593]: I0113 21:22:17.533627 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8"
Jan 13 21:22:17.534071 containerd[1459]: time="2025-01-13T21:22:17.534039098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 21:22:17.534125 containerd[1459]: time="2025-01-13T21:22:17.534094011Z" level=info msg="StopPodSandbox for \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\""
Jan 13 21:22:17.534271 containerd[1459]: time="2025-01-13T21:22:17.534217994Z" level=info msg="Ensure that sandbox d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8 in task-service has been cleanup successfully"
Jan 13 21:22:17.535592 kubelet[2593]: I0113 21:22:17.535093 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53"
Jan 13 21:22:17.535687 containerd[1459]: time="2025-01-13T21:22:17.535433178Z" level=info msg="StopPodSandbox for \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\""
Jan 13 21:22:17.535687 containerd[1459]: time="2025-01-13T21:22:17.535561478Z" level=info msg="Ensure that sandbox 459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53 in task-service has been cleanup successfully"
Jan 13 21:22:17.537341 kubelet[2593]: I0113 21:22:17.537308 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a"
Jan 13 21:22:17.538685 containerd[1459]: time="2025-01-13T21:22:17.537747276Z" level=info msg="StopPodSandbox for \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\""
Jan 13 21:22:17.538685 containerd[1459]: time="2025-01-13T21:22:17.537914951Z" level=info msg="Ensure that sandbox 04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a in task-service has been cleanup successfully"
Jan 13 21:22:17.538775 kubelet[2593]: I0113 21:22:17.538738 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1"
Jan 13 21:22:17.539639 containerd[1459]: time="2025-01-13T21:22:17.539608102Z" level=info msg="StopPodSandbox for \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\""
Jan 13 21:22:17.539963 containerd[1459]: time="2025-01-13T21:22:17.539736393Z" level=info msg="Ensure that sandbox 70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1 in task-service has been cleanup successfully"
Jan 13 21:22:17.544471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1-shm.mount: Deactivated successfully.
Jan 13 21:22:17.547763 kubelet[2593]: I0113 21:22:17.547733 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1"
Jan 13 21:22:17.551133 containerd[1459]: time="2025-01-13T21:22:17.550280565Z" level=info msg="StopPodSandbox for \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\""
Jan 13 21:22:17.551133 containerd[1459]: time="2025-01-13T21:22:17.550455463Z" level=info msg="Ensure that sandbox be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1 in task-service has been cleanup successfully"
Jan 13 21:22:17.580375 containerd[1459]: time="2025-01-13T21:22:17.580258620Z" level=error msg="StopPodSandbox for \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\" failed" error="failed to destroy network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.580779 kubelet[2593]: E0113 21:22:17.580727 2593 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53"
Jan 13 21:22:17.580890 kubelet[2593]: E0113 21:22:17.580798 2593 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53"}
Jan 13 21:22:17.580926 kubelet[2593]: E0113 21:22:17.580897 2593 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b08e0a5-05ea-4ffe-b58b-456364b2d1ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:22:17.580999 kubelet[2593]: E0113 21:22:17.580929 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b08e0a5-05ea-4ffe-b58b-456364b2d1ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64584b8b84-2rfb7" podUID="4b08e0a5-05ea-4ffe-b58b-456364b2d1ae"
Jan 13 21:22:17.597970 containerd[1459]: time="2025-01-13T21:22:17.597909214Z" level=error msg="StopPodSandbox for \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\" failed" error="failed to destroy network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.598379 kubelet[2593]: E0113 21:22:17.598151 2593 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8"
Jan 13 21:22:17.598379 kubelet[2593]: E0113 21:22:17.598207 2593 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8"}
Jan 13 21:22:17.598379 kubelet[2593]: E0113 21:22:17.598248 2593 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10badecf-9cf2-455b-8b0e-b7541f200545\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:22:17.598379 kubelet[2593]: E0113 21:22:17.598278 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10badecf-9cf2-455b-8b0e-b7541f200545\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6496d4bbf-28tcw" podUID="10badecf-9cf2-455b-8b0e-b7541f200545"
Jan 13 21:22:17.600388 containerd[1459]: time="2025-01-13T21:22:17.600295639Z" level=error msg="StopPodSandbox for \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\" failed" error="failed to destroy network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.600526 kubelet[2593]: E0113 21:22:17.600474 2593 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1"
Jan 13 21:22:17.600585 kubelet[2593]: E0113 21:22:17.600527 2593 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1"}
Jan 13 21:22:17.600585 kubelet[2593]: E0113 21:22:17.600553 2593 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95694724-301a-4ddf-b650-581e197892dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:22:17.600585 kubelet[2593]: E0113 21:22:17.600577 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95694724-301a-4ddf-b650-581e197892dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wv2tn" podUID="95694724-301a-4ddf-b650-581e197892dc"
Jan 13 21:22:17.603364 containerd[1459]: time="2025-01-13T21:22:17.603325632Z" level=error msg="StopPodSandbox for \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\" failed" error="failed to destroy network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.603523 kubelet[2593]: E0113 21:22:17.603500 2593 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a"
Jan 13 21:22:17.603560 kubelet[2593]: E0113 21:22:17.603530 2593 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a"}
Jan 13 21:22:17.603560 kubelet[2593]: E0113 21:22:17.603551 2593 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c65feba5-e029-4c69-b7ee-c32a5deacfc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:22:17.603657 kubelet[2593]: E0113 21:22:17.603570 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c65feba5-e029-4c69-b7ee-c32a5deacfc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64584b8b84-x9bt6" podUID="c65feba5-e029-4c69-b7ee-c32a5deacfc3"
Jan 13 21:22:17.608281 containerd[1459]: time="2025-01-13T21:22:17.608244674Z" level=error msg="StopPodSandbox for \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\" failed" error="failed to destroy network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:17.608434 kubelet[2593]: E0113 21:22:17.608407 2593 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1"
Jan 13 21:22:17.608492 kubelet[2593]: E0113 21:22:17.608439 2593 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1"}
Jan 13 21:22:17.608492 kubelet[2593]: E0113 21:22:17.608470 2593 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:22:17.608492 kubelet[2593]: E0113 21:22:17.608487 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zkpxx" podUID="b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7"
Jan 13 21:22:18.463238 systemd[1]: Created slice kubepods-besteffort-pod2faded03_4e90_4e3a_85c7_86d52abea6de.slice - libcontainer container kubepods-besteffort-pod2faded03_4e90_4e3a_85c7_86d52abea6de.slice.
Jan 13 21:22:18.465188 containerd[1459]: time="2025-01-13T21:22:18.465158618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d4pmk,Uid:2faded03-4e90-4e3a-85c7-86d52abea6de,Namespace:calico-system,Attempt:0,}"
Jan 13 21:22:18.517480 containerd[1459]: time="2025-01-13T21:22:18.517430600Z" level=error msg="Failed to destroy network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:18.517828 containerd[1459]: time="2025-01-13T21:22:18.517796377Z" level=error msg="encountered an error cleaning up failed sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:18.517903 containerd[1459]: time="2025-01-13T21:22:18.517882159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d4pmk,Uid:2faded03-4e90-4e3a-85c7-86d52abea6de,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:18.518114 kubelet[2593]: E0113 21:22:18.518070 2593 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:18.518202 kubelet[2593]: E0113 21:22:18.518130 2593 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d4pmk"
Jan 13 21:22:18.518202 kubelet[2593]: E0113 21:22:18.518150 2593 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d4pmk"
Jan 13 21:22:18.518250 kubelet[2593]: E0113 21:22:18.518191 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d4pmk_calico-system(2faded03-4e90-4e3a-85c7-86d52abea6de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d4pmk_calico-system(2faded03-4e90-4e3a-85c7-86d52abea6de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d4pmk" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de"
Jan 13 21:22:18.520369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e-shm.mount: Deactivated successfully.
Jan 13 21:22:18.549476 kubelet[2593]: I0113 21:22:18.549439 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e"
Jan 13 21:22:18.550433 containerd[1459]: time="2025-01-13T21:22:18.550076271Z" level=info msg="StopPodSandbox for \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\""
Jan 13 21:22:18.550433 containerd[1459]: time="2025-01-13T21:22:18.550261970Z" level=info msg="Ensure that sandbox 1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e in task-service has been cleanup successfully"
Jan 13 21:22:18.576315 containerd[1459]: time="2025-01-13T21:22:18.576253911Z" level=error msg="StopPodSandbox for \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\" failed" error="failed to destroy network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:22:18.576568 kubelet[2593]: E0113 21:22:18.576516 2593 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e"
Jan 13 21:22:18.576619 kubelet[2593]: E0113 21:22:18.576571 2593 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e"}
Jan 13 21:22:18.576619 kubelet[2593]: E0113 21:22:18.576603 2593 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2faded03-4e90-4e3a-85c7-86d52abea6de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:22:18.576713 kubelet[2593]: E0113 21:22:18.576635 2593 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2faded03-4e90-4e3a-85c7-86d52abea6de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d4pmk" podUID="2faded03-4e90-4e3a-85c7-86d52abea6de"
Jan 13 21:22:21.860098 systemd[1]: Started sshd@11-10.0.0.66:22-10.0.0.1:50826.service - OpenSSH per-connection server daemon (10.0.0.1:50826).
Jan 13 21:22:21.890265 sshd[3705]: Accepted publickey for core from 10.0.0.1 port 50826 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:22:21.892405 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:21.896567 systemd-logind[1443]: New session 12 of user core.
Jan 13 21:22:21.903971 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:22:21.911482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078317923.mount: Deactivated successfully.
Jan 13 21:22:22.099838 sshd[3705]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:22.110810 systemd[1]: sshd@11-10.0.0.66:22-10.0.0.1:50826.service: Deactivated successfully.
Jan 13 21:22:22.112694 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:22:22.114300 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:22:22.119342 systemd[1]: Started sshd@12-10.0.0.66:22-10.0.0.1:50842.service - OpenSSH per-connection server daemon (10.0.0.1:50842).
Jan 13 21:22:22.120228 systemd-logind[1443]: Removed session 12.
Jan 13 21:22:22.146022 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 50842 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:22:22.147471 sshd[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:22.151058 systemd-logind[1443]: New session 13 of user core.
Jan 13 21:22:22.160974 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:22:22.344450 sshd[3720]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:22.352018 systemd[1]: sshd@12-10.0.0.66:22-10.0.0.1:50842.service: Deactivated successfully.
Jan 13 21:22:22.353967 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:22:22.355736 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:22:22.362086 systemd[1]: Started sshd@13-10.0.0.66:22-10.0.0.1:50856.service - OpenSSH per-connection server daemon (10.0.0.1:50856).
Jan 13 21:22:22.363416 systemd-logind[1443]: Removed session 13.
Jan 13 21:22:22.389041 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 50856 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:22:22.390618 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:22.394842 systemd-logind[1443]: New session 14 of user core.
Jan 13 21:22:22.403104 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:22:22.514316 sshd[3733]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:22.518584 systemd[1]: sshd@13-10.0.0.66:22-10.0.0.1:50856.service: Deactivated successfully.
Jan 13 21:22:22.520675 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:22:22.521430 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:22:22.522452 systemd-logind[1443]: Removed session 14.
Jan 13 21:22:22.711206 containerd[1459]: time="2025-01-13T21:22:22.711132117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:22.712155 containerd[1459]: time="2025-01-13T21:22:22.712075470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:22:22.713491 containerd[1459]: time="2025-01-13T21:22:22.713467102Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:22.716559 containerd[1459]: time="2025-01-13T21:22:22.716506610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.182422658s" Jan 13 21:22:22.716626 containerd[1459]: time="2025-01-13T21:22:22.716557035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:22:22.721515 containerd[1459]: time="2025-01-13T21:22:22.721471144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:22.725005 containerd[1459]: time="2025-01-13T21:22:22.724903930Z" level=info msg="CreateContainer within sandbox \"1223c0766ef6339cecd60bee6dc38efd9a78c9856ef589b95163f93f35b7f4b2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:22:22.756483 containerd[1459]: time="2025-01-13T21:22:22.756435965Z" level=info msg="CreateContainer within sandbox \"1223c0766ef6339cecd60bee6dc38efd9a78c9856ef589b95163f93f35b7f4b2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e46a415422d82c7a5fa2e6d95fd750505d31509a837d02250b177e764a589eae\"" Jan 13 21:22:22.757055 containerd[1459]: time="2025-01-13T21:22:22.757023619Z" level=info msg="StartContainer for \"e46a415422d82c7a5fa2e6d95fd750505d31509a837d02250b177e764a589eae\"" Jan 13 21:22:22.824985 systemd[1]: Started cri-containerd-e46a415422d82c7a5fa2e6d95fd750505d31509a837d02250b177e764a589eae.scope - libcontainer container e46a415422d82c7a5fa2e6d95fd750505d31509a837d02250b177e764a589eae. Jan 13 21:22:22.857985 containerd[1459]: time="2025-01-13T21:22:22.857938781Z" level=info msg="StartContainer for \"e46a415422d82c7a5fa2e6d95fd750505d31509a837d02250b177e764a589eae\" returns successfully" Jan 13 21:22:22.919580 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:22:22.919725 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 13 21:22:23.583692 kubelet[2593]: E0113 21:22:23.583655 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:23.595827 kubelet[2593]: I0113 21:22:23.595765 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zx4mv" podStartSLOduration=1.532187994 podStartE2EDuration="19.59574803s" podCreationTimestamp="2025-01-13 21:22:04 +0000 UTC" firstStartedPulling="2025-01-13 21:22:04.653715048 +0000 UTC m=+23.271245301" lastFinishedPulling="2025-01-13 21:22:22.717275084 +0000 UTC m=+41.334805337" observedRunningTime="2025-01-13 21:22:23.594718657 +0000 UTC m=+42.212248911" watchObservedRunningTime="2025-01-13 21:22:23.59574803 +0000 UTC m=+42.213278283" Jan 13 21:22:24.585332 kubelet[2593]: E0113 21:22:24.585295 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:27.526143 systemd[1]: Started sshd@14-10.0.0.66:22-10.0.0.1:54414.service - OpenSSH per-connection server daemon (10.0.0.1:54414). Jan 13 21:22:27.564039 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 54414 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:27.565725 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:27.570229 systemd-logind[1443]: New session 15 of user core. Jan 13 21:22:27.586048 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:22:27.704186 sshd[4034]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:27.708357 systemd[1]: sshd@14-10.0.0.66:22-10.0.0.1:54414.service: Deactivated successfully. Jan 13 21:22:27.710395 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:22:27.711071 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:22:27.711917 systemd-logind[1443]: Removed session 15. Jan 13 21:22:28.458602 containerd[1459]: time="2025-01-13T21:22:28.458518315Z" level=info msg="StopPodSandbox for \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\"" Jan 13 21:22:28.459077 containerd[1459]: time="2025-01-13T21:22:28.458601722Z" level=info msg="StopPodSandbox for \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\"" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.502 [INFO][4104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.502 [INFO][4104] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" iface="eth0" netns="/var/run/netns/cni-6a4c9c58-3647-48cb-c978-2a2c3160d7c3" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.502 [INFO][4104] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" iface="eth0" netns="/var/run/netns/cni-6a4c9c58-3647-48cb-c978-2a2c3160d7c3" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.504 [INFO][4104] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" iface="eth0" netns="/var/run/netns/cni-6a4c9c58-3647-48cb-c978-2a2c3160d7c3" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.505 [INFO][4104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.505 [INFO][4104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.555 [INFO][4118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" HandleID="k8s-pod-network.d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.555 [INFO][4118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.556 [INFO][4118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.561 [WARNING][4118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" HandleID="k8s-pod-network.d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.561 [INFO][4118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" HandleID="k8s-pod-network.d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.562 [INFO][4118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:28.567329 containerd[1459]: 2025-01-13 21:22:28.565 [INFO][4104] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:28.568064 containerd[1459]: time="2025-01-13T21:22:28.568025332Z" level=info msg="TearDown network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\" successfully" Jan 13 21:22:28.568064 containerd[1459]: time="2025-01-13T21:22:28.568058133Z" level=info msg="StopPodSandbox for \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\" returns successfully" Jan 13 21:22:28.568902 containerd[1459]: time="2025-01-13T21:22:28.568828188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496d4bbf-28tcw,Uid:10badecf-9cf2-455b-8b0e-b7541f200545,Namespace:calico-system,Attempt:1,}" Jan 13 21:22:28.570301 systemd[1]: run-netns-cni\x2d6a4c9c58\x2d3647\x2d48cb\x2dc978\x2d2a2c3160d7c3.mount: Deactivated successfully. Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.504 [INFO][4103] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.504 [INFO][4103] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" iface="eth0" netns="/var/run/netns/cni-71dfd5ef-5d7a-6e1d-e1cd-6f224c49e967" Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.505 [INFO][4103] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" iface="eth0" netns="/var/run/netns/cni-71dfd5ef-5d7a-6e1d-e1cd-6f224c49e967" Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.505 [INFO][4103] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" iface="eth0" netns="/var/run/netns/cni-71dfd5ef-5d7a-6e1d-e1cd-6f224c49e967" Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.505 [INFO][4103] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.505 [INFO][4103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.555 [INFO][4119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" HandleID="k8s-pod-network.be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.556 [INFO][4119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.562 [INFO][4119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.568 [WARNING][4119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" HandleID="k8s-pod-network.be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.568 [INFO][4119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" HandleID="k8s-pod-network.be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.569 [INFO][4119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:28.574698 containerd[1459]: 2025-01-13 21:22:28.572 [INFO][4103] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:28.575059 containerd[1459]: time="2025-01-13T21:22:28.574913862Z" level=info msg="TearDown network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\" successfully" Jan 13 21:22:28.575059 containerd[1459]: time="2025-01-13T21:22:28.574934070Z" level=info msg="StopPodSandbox for \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\" returns successfully" Jan 13 21:22:28.575508 kubelet[2593]: E0113 21:22:28.575332 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:28.576102 containerd[1459]: time="2025-01-13T21:22:28.576073719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wv2tn,Uid:95694724-301a-4ddf-b650-581e197892dc,Namespace:kube-system,Attempt:1,}" Jan 13 21:22:28.578236 systemd[1]: run-netns-cni\x2d71dfd5ef\x2d5d7a\x2d6e1d\x2de1cd\x2d6f224c49e967.mount: Deactivated successfully. Jan 13 21:22:28.696047 systemd-networkd[1390]: calie42b02478fb: Link UP Jan 13 21:22:28.697149 systemd-networkd[1390]: calie42b02478fb: Gained carrier Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.609 [INFO][4133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.620 [INFO][4133] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0 calico-kube-controllers-6496d4bbf- calico-system 10badecf-9cf2-455b-8b0e-b7541f200545 923 0 2025-01-13 21:22:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6496d4bbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6496d4bbf-28tcw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie42b02478fb [] []}} ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Namespace="calico-system" Pod="calico-kube-controllers-6496d4bbf-28tcw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.620 [INFO][4133] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Namespace="calico-system" Pod="calico-kube-controllers-6496d4bbf-28tcw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.651 [INFO][4162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" HandleID="k8s-pod-network.917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.659 [INFO][4162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" HandleID="k8s-pod-network.917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" 
Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4a00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6496d4bbf-28tcw", "timestamp":"2025-01-13 21:22:28.651237941 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.659 [INFO][4162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.659 [INFO][4162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.659 [INFO][4162] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.661 [INFO][4162] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" host="localhost" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.667 [INFO][4162] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.671 [INFO][4162] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.673 [INFO][4162] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.674 [INFO][4162] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.674 [INFO][4162] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" host="localhost" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.676 [INFO][4162] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.682 [INFO][4162] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" host="localhost" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.686 [INFO][4162] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" host="localhost" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.686 [INFO][4162] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" host="localhost" Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.686 [INFO][4162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:22:28.708791 containerd[1459]: 2025-01-13 21:22:28.686 [INFO][4162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" HandleID="k8s-pod-network.917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.710430 containerd[1459]: 2025-01-13 21:22:28.688 [INFO][4133] cni-plugin/k8s.go 386: Populated endpoint ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Namespace="calico-system" Pod="calico-kube-controllers-6496d4bbf-28tcw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0", GenerateName:"calico-kube-controllers-6496d4bbf-", Namespace:"calico-system", SelfLink:"", UID:"10badecf-9cf2-455b-8b0e-b7541f200545", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496d4bbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6496d4bbf-28tcw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie42b02478fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:28.710430 containerd[1459]: 2025-01-13 21:22:28.688 [INFO][4133] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Namespace="calico-system" Pod="calico-kube-controllers-6496d4bbf-28tcw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.710430 containerd[1459]: 2025-01-13 21:22:28.688 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie42b02478fb ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Namespace="calico-system" Pod="calico-kube-controllers-6496d4bbf-28tcw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.710430 containerd[1459]: 2025-01-13 21:22:28.696 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Namespace="calico-system" Pod="calico-kube-controllers-6496d4bbf-28tcw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.710430 containerd[1459]: 2025-01-13 21:22:28.697 [INFO][4133] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Namespace="calico-system" Pod="calico-kube-controllers-6496d4bbf-28tcw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0", GenerateName:"calico-kube-controllers-6496d4bbf-", Namespace:"calico-system", SelfLink:"", UID:"10badecf-9cf2-455b-8b0e-b7541f200545", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496d4bbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b", Pod:"calico-kube-controllers-6496d4bbf-28tcw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie42b02478fb", MAC:"0a:c2:0f:01:b2:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:28.710430 containerd[1459]: 2025-01-13 21:22:28.706 [INFO][4133] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b" Namespace="calico-system" Pod="calico-kube-controllers-6496d4bbf-28tcw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:28.718013 systemd-networkd[1390]: cali2f49869b38f: Link UP Jan 13 21:22:28.718474 systemd-networkd[1390]: cali2f49869b38f: Gained carrier Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.621 [INFO][4146] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.631 [INFO][4146] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0 coredns-7db6d8ff4d- kube-system 95694724-301a-4ddf-b650-581e197892dc 924 0 2025-01-13 21:21:56 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-wv2tn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f49869b38f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wv2tn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wv2tn-" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.631 [INFO][4146] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s
ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wv2tn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.663 [INFO][4168] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" HandleID="k8s-pod-network.bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.670 [INFO][4168] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" HandleID="k8s-pod-network.bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-wv2tn", "timestamp":"2025-01-13 21:22:28.663369212 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.670 [INFO][4168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.686 [INFO][4168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.686 [INFO][4168] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.688 [INFO][4168] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" host="localhost" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.691 [INFO][4168] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.695 [INFO][4168] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.697 [INFO][4168] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.698 [INFO][4168] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.698 [INFO][4168] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" host="localhost" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.699 [INFO][4168] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2 Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.703 [INFO][4168] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" host="localhost" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.712 [INFO][4168] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" host="localhost" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.712 [INFO][4168] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" host="localhost" Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.712 [INFO][4168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:28.733483 containerd[1459]: 2025-01-13 21:22:28.712 [INFO][4168] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" HandleID="k8s-pod-network.bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.734086 containerd[1459]: 2025-01-13 21:22:28.715 [INFO][4146] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wv2tn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"95694724-301a-4ddf-b650-581e197892dc", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-wv2tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f49869b38f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:28.734086 containerd[1459]: 2025-01-13 21:22:28.715 [INFO][4146] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wv2tn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.734086 containerd[1459]: 2025-01-13 21:22:28.715 [INFO][4146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f49869b38f
ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wv2tn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.734086 containerd[1459]: 2025-01-13 21:22:28.718 [INFO][4146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wv2tn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.734086 containerd[1459]: 2025-01-13 21:22:28.719 [INFO][4146] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wv2tn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"95694724-301a-4ddf-b650-581e197892dc", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2", Pod:"coredns-7db6d8ff4d-wv2tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f49869b38f", MAC:"22:22:d4:d7:30:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:28.734086 containerd[1459]: 2025-01-13 21:22:28.730 [INFO][4146] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wv2tn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:28.745508 containerd[1459]: time="2025-01-13T21:22:28.745405262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:28.745508 containerd[1459]: time="2025-01-13T21:22:28.745467389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:28.745508 containerd[1459]: time="2025-01-13T21:22:28.745481616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:28.745691 containerd[1459]: time="2025-01-13T21:22:28.745571324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:28.754741 containerd[1459]: time="2025-01-13T21:22:28.754442816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:28.754741 containerd[1459]: time="2025-01-13T21:22:28.754491327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:28.754741 containerd[1459]: time="2025-01-13T21:22:28.754512487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:28.754741 containerd[1459]: time="2025-01-13T21:22:28.754617755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:28.765287 systemd[1]: Started cri-containerd-917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b.scope - libcontainer container 917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b. Jan 13 21:22:28.770586 systemd[1]: Started cri-containerd-bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2.scope - libcontainer container bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2. Jan 13 21:22:28.780310 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:28.782759 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:28.805982 containerd[1459]: time="2025-01-13T21:22:28.805931145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496d4bbf-28tcw,Uid:10badecf-9cf2-455b-8b0e-b7541f200545,Namespace:calico-system,Attempt:1,} returns sandbox id \"917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b\"" Jan 13 21:22:28.809876 containerd[1459]: time="2025-01-13T21:22:28.809736969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:22:28.811107 containerd[1459]: time="2025-01-13T21:22:28.811004649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wv2tn,Uid:95694724-301a-4ddf-b650-581e197892dc,Namespace:kube-system,Attempt:1,} returns sandbox id \"bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2\"" Jan 13 21:22:28.811767 kubelet[2593]: E0113 21:22:28.811702 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:28.814615 containerd[1459]: time="2025-01-13T21:22:28.814212180Z" level=info msg="CreateContainer within sandbox \"bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:22:28.832589 containerd[1459]: time="2025-01-13T21:22:28.832531026Z" level=info msg="CreateContainer within sandbox \"bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89973380596283a24f08025065d5f2648f671b6f6ae5c6efe9b8f372ae604b96\"" Jan 13 21:22:28.833115 containerd[1459]: time="2025-01-13T21:22:28.833092059Z" level=info msg="StartContainer for \"89973380596283a24f08025065d5f2648f671b6f6ae5c6efe9b8f372ae604b96\"" Jan 13 21:22:28.863001 systemd[1]: Started cri-containerd-89973380596283a24f08025065d5f2648f671b6f6ae5c6efe9b8f372ae604b96.scope - libcontainer container 89973380596283a24f08025065d5f2648f671b6f6ae5c6efe9b8f372ae604b96. Jan 13 21:22:28.889810 containerd[1459]: time="2025-01-13T21:22:28.889767074Z" level=info msg="StartContainer for \"89973380596283a24f08025065d5f2648f671b6f6ae5c6efe9b8f372ae604b96\" returns successfully" Jan 13 21:22:29.460706 containerd[1459]: time="2025-01-13T21:22:29.460656424Z" level=info msg="StopPodSandbox for \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\"" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.509 [INFO][4361] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.510 [INFO][4361] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" iface="eth0" netns="/var/run/netns/cni-c1272a6c-8fc4-16d2-2482-75dad2c5d568" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.510 [INFO][4361] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" iface="eth0" netns="/var/run/netns/cni-c1272a6c-8fc4-16d2-2482-75dad2c5d568" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.510 [INFO][4361] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" iface="eth0" netns="/var/run/netns/cni-c1272a6c-8fc4-16d2-2482-75dad2c5d568" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.510 [INFO][4361] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.510 [INFO][4361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.530 [INFO][4371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" HandleID="k8s-pod-network.04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.530 [INFO][4371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.530 [INFO][4371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.535 [WARNING][4371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" HandleID="k8s-pod-network.04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.535 [INFO][4371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" HandleID="k8s-pod-network.04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.536 [INFO][4371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:29.541527 containerd[1459]: 2025-01-13 21:22:29.538 [INFO][4361] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:29.542008 containerd[1459]: time="2025-01-13T21:22:29.541696986Z" level=info msg="TearDown network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\" successfully" Jan 13 21:22:29.542008 containerd[1459]: time="2025-01-13T21:22:29.541726812Z" level=info msg="StopPodSandbox for \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\" returns successfully" Jan 13 21:22:29.542372 containerd[1459]: time="2025-01-13T21:22:29.542327590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64584b8b84-x9bt6,Uid:c65feba5-e029-4c69-b7ee-c32a5deacfc3,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:22:29.575068 systemd[1]: run-netns-cni\x2dc1272a6c\x2d8fc4\x2d16d2\x2d2482\x2d75dad2c5d568.mount: Deactivated successfully. Jan 13 21:22:29.600055 kubelet[2593]: E0113 21:22:29.600017 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:29.609406 kubelet[2593]: I0113 21:22:29.609092 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wv2tn" podStartSLOduration=33.609070926 podStartE2EDuration="33.609070926s" podCreationTimestamp="2025-01-13 21:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:29.608958245 +0000 UTC m=+48.226488498" watchObservedRunningTime="2025-01-13 21:22:29.609070926 +0000 UTC m=+48.226601179" Jan 13 21:22:29.651215 systemd-networkd[1390]: calic32831cf31b: Link UP Jan 13 21:22:29.652254 systemd-networkd[1390]: calic32831cf31b: Gained carrier Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.573 [INFO][4379] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.581 [INFO][4379] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0 calico-apiserver-64584b8b84- calico-apiserver c65feba5-e029-4c69-b7ee-c32a5deacfc3 941 0 2025-01-13 21:22:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64584b8b84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64584b8b84-x9bt6 eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic32831cf31b [] []}} ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-x9bt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.581 [INFO][4379] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-x9bt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.608 [INFO][4393] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" HandleID="k8s-pod-network.c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.617 [INFO][4393] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" HandleID="k8s-pod-network.c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001387a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64584b8b84-x9bt6", "timestamp":"2025-01-13 21:22:29.608325487 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.617 [INFO][4393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.617 [INFO][4393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.617 [INFO][4393] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.621 [INFO][4393] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" host="localhost" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.630 [INFO][4393] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.634 [INFO][4393] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.635 [INFO][4393] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.637 [INFO][4393] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.637 [INFO][4393] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" host="localhost" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.638 [INFO][4393] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58 Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.641 [INFO][4393] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" host="localhost" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.645 [INFO][4393] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" host="localhost" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.645 [INFO][4393] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" host="localhost" Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.645 [INFO][4393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:22:29.669299 containerd[1459]: 2025-01-13 21:22:29.645 [INFO][4393] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" HandleID="k8s-pod-network.c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.669902 containerd[1459]: 2025-01-13 21:22:29.649 [INFO][4379] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-x9bt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0", GenerateName:"calico-apiserver-64584b8b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"c65feba5-e029-4c69-b7ee-c32a5deacfc3", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64584b8b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64584b8b84-x9bt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic32831cf31b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.669902 containerd[1459]: 2025-01-13 21:22:29.649 [INFO][4379] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-x9bt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.669902 containerd[1459]: 2025-01-13 21:22:29.649 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic32831cf31b ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-x9bt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.669902 containerd[1459]: 2025-01-13 21:22:29.651 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-x9bt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.669902 containerd[1459]: 2025-01-13 21:22:29.651 [INFO][4379] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-x9bt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0", GenerateName:"calico-apiserver-64584b8b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"c65feba5-e029-4c69-b7ee-c32a5deacfc3", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64584b8b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58", Pod:"calico-apiserver-64584b8b84-x9bt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic32831cf31b", MAC:"f2:af:55:a3:05:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:29.669902 containerd[1459]: 2025-01-13 21:22:29.666 [INFO][4379] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-x9bt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:29.690114 containerd[1459]: time="2025-01-13T21:22:29.690039344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:29.690114 containerd[1459]: time="2025-01-13T21:22:29.690092513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:29.690114 containerd[1459]: time="2025-01-13T21:22:29.690103564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:29.690290 containerd[1459]: time="2025-01-13T21:22:29.690170770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:29.717021 systemd[1]: Started cri-containerd-c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58.scope - libcontainer container c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58.
Jan 13 21:22:29.728982 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:29.752360 containerd[1459]: time="2025-01-13T21:22:29.752308302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64584b8b84-x9bt6,Uid:c65feba5-e029-4c69-b7ee-c32a5deacfc3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58\"" Jan 13 21:22:30.103061 systemd-networkd[1390]: cali2f49869b38f: Gained IPv6LL Jan 13 21:22:30.359011 systemd-networkd[1390]: calie42b02478fb: Gained IPv6LL Jan 13 21:22:30.460132 containerd[1459]: time="2025-01-13T21:22:30.459649492Z" level=info msg="StopPodSandbox for \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\"" Jan 13 21:22:30.460132 containerd[1459]: time="2025-01-13T21:22:30.459762354Z" level=info msg="StopPodSandbox for \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\"" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.513 [INFO][4489] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.514 [INFO][4489] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" iface="eth0" netns="/var/run/netns/cni-996eafdd-9877-64d1-cc92-ceb0519f617d" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.515 [INFO][4489] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" iface="eth0" netns="/var/run/netns/cni-996eafdd-9877-64d1-cc92-ceb0519f617d" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.516 [INFO][4489] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" iface="eth0" netns="/var/run/netns/cni-996eafdd-9877-64d1-cc92-ceb0519f617d" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.516 [INFO][4489] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.516 [INFO][4489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.570 [INFO][4517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" HandleID="k8s-pod-network.459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.570 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.570 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.576 [WARNING][4517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" HandleID="k8s-pod-network.459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.576 [INFO][4517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" HandleID="k8s-pod-network.459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.579 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:30.585560 containerd[1459]: 2025-01-13 21:22:30.582 [INFO][4489] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:30.590871 containerd[1459]: time="2025-01-13T21:22:30.589118612Z" level=info msg="TearDown network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\" successfully" Jan 13 21:22:30.590871 containerd[1459]: time="2025-01-13T21:22:30.589162093Z" level=info msg="StopPodSandbox for \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\" returns successfully" Jan 13 21:22:30.589515 systemd[1]: run-netns-cni\x2d996eafdd\x2d9877\x2d64d1\x2dcc92\x2dceb0519f617d.mount: Deactivated successfully. Jan 13 21:22:30.591540 containerd[1459]: time="2025-01-13T21:22:30.591508538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64584b8b84-2rfb7,Uid:4b08e0a5-05ea-4ffe-b58b-456364b2d1ae,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.533 [INFO][4498] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.534 [INFO][4498] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" iface="eth0" netns="/var/run/netns/cni-d19545de-24eb-d2f6-dee6-ff5e4df91203" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.534 [INFO][4498] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" iface="eth0" netns="/var/run/netns/cni-d19545de-24eb-d2f6-dee6-ff5e4df91203" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.535 [INFO][4498] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" iface="eth0" netns="/var/run/netns/cni-d19545de-24eb-d2f6-dee6-ff5e4df91203" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.535 [INFO][4498] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.535 [INFO][4498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.581 [INFO][4532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" HandleID="k8s-pod-network.1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.581 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.581 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.592 [WARNING][4532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" HandleID="k8s-pod-network.1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.592 [INFO][4532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" HandleID="k8s-pod-network.1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.594 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:30.600680 containerd[1459]: 2025-01-13 21:22:30.597 [INFO][4498] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:30.601971 containerd[1459]: time="2025-01-13T21:22:30.601932142Z" level=info msg="TearDown network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\" successfully" Jan 13 21:22:30.601971 containerd[1459]: time="2025-01-13T21:22:30.601962619Z" level=info msg="StopPodSandbox for \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\" returns successfully" Jan 13 21:22:30.603718 containerd[1459]: time="2025-01-13T21:22:30.602433883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d4pmk,Uid:2faded03-4e90-4e3a-85c7-86d52abea6de,Namespace:calico-system,Attempt:1,}" Jan 13 21:22:30.603976 systemd[1]: run-netns-cni\x2dd19545de\x2d24eb\x2dd2f6\x2ddee6\x2dff5e4df91203.mount: Deactivated successfully. 
Jan 13 21:22:30.748567 kubelet[2593]: E0113 21:22:30.607361 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:30.769531 containerd[1459]: time="2025-01-13T21:22:30.768755751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:30.771908 containerd[1459]: time="2025-01-13T21:22:30.771735845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:22:30.774525 containerd[1459]: time="2025-01-13T21:22:30.774490875Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:30.782041 containerd[1459]: time="2025-01-13T21:22:30.782003105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:30.782718 containerd[1459]: time="2025-01-13T21:22:30.782656772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.972885429s" Jan 13 21:22:30.782718 containerd[1459]: time="2025-01-13T21:22:30.782688492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:22:30.789869 containerd[1459]: time="2025-01-13T21:22:30.786739194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:22:30.796203 containerd[1459]: time="2025-01-13T21:22:30.796173933Z" level=info msg="CreateContainer within sandbox \"917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:22:30.818058 containerd[1459]: time="2025-01-13T21:22:30.817952007Z" level=info msg="CreateContainer within sandbox \"917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"57bc96568a5a3db22bd98961972354f5e5a455b9e00875d98347aec2b990d395\"" Jan 13 21:22:30.819535 containerd[1459]: time="2025-01-13T21:22:30.818668231Z" level=info msg="StartContainer for \"57bc96568a5a3db22bd98961972354f5e5a455b9e00875d98347aec2b990d395\"" Jan 13 21:22:30.850980 systemd[1]: Started cri-containerd-57bc96568a5a3db22bd98961972354f5e5a455b9e00875d98347aec2b990d395.scope - libcontainer container 57bc96568a5a3db22bd98961972354f5e5a455b9e00875d98347aec2b990d395. 
Jan 13 21:22:30.889911 systemd-networkd[1390]: cali143cac43127: Link UP Jan 13 21:22:30.893878 systemd-networkd[1390]: cali143cac43127: Gained carrier Jan 13 21:22:30.905296 containerd[1459]: time="2025-01-13T21:22:30.905238504Z" level=info msg="StartContainer for \"57bc96568a5a3db22bd98961972354f5e5a455b9e00875d98347aec2b990d395\" returns successfully" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.806 [INFO][4555] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.815 [INFO][4555] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--d4pmk-eth0 csi-node-driver- calico-system 2faded03-4e90-4e3a-85c7-86d52abea6de 962 0 2025-01-13 21:22:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-d4pmk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali143cac43127 [] []}} ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Namespace="calico-system" Pod="csi-node-driver-d4pmk" WorkloadEndpoint="localhost-k8s-csi--node--driver--d4pmk-" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.815 [INFO][4555] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Namespace="calico-system" Pod="csi-node-driver-d4pmk" WorkloadEndpoint="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.848 [INFO][4586] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" HandleID="k8s-pod-network.16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.857 [INFO][4586] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" HandleID="k8s-pod-network.16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-d4pmk", "timestamp":"2025-01-13 21:22:30.848484984 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.857 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.857 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.857 [INFO][4586] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.858 [INFO][4586] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" host="localhost" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.862 [INFO][4586] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.866 [INFO][4586] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.867 [INFO][4586] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.869 [INFO][4586] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.869 [INFO][4586] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" host="localhost" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.871 [INFO][4586] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66 Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.874 [INFO][4586] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" host="localhost" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.879 [INFO][4586] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" host="localhost" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.879 [INFO][4586] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" host="localhost" Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.879 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:22:30.907791 containerd[1459]: 2025-01-13 21:22:30.879 [INFO][4586] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" HandleID="k8s-pod-network.16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.908936 containerd[1459]: 2025-01-13 21:22:30.883 [INFO][4555] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Namespace="calico-system" Pod="csi-node-driver-d4pmk" WorkloadEndpoint="localhost-k8s-csi--node--driver--d4pmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d4pmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2faded03-4e90-4e3a-85c7-86d52abea6de", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-d4pmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143cac43127", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:30.908936 containerd[1459]: 2025-01-13 21:22:30.884 [INFO][4555] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Namespace="calico-system" Pod="csi-node-driver-d4pmk" WorkloadEndpoint="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.908936 containerd[1459]: 2025-01-13 21:22:30.884 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali143cac43127 ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Namespace="calico-system" Pod="csi-node-driver-d4pmk" WorkloadEndpoint="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.908936 containerd[1459]: 2025-01-13 21:22:30.893 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Namespace="calico-system" Pod="csi-node-driver-d4pmk" WorkloadEndpoint="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.908936 containerd[1459]: 2025-01-13 21:22:30.893 [INFO][4555] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Namespace="calico-system" Pod="csi-node-driver-d4pmk" WorkloadEndpoint="localhost-k8s-csi--node--driver--d4pmk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d4pmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2faded03-4e90-4e3a-85c7-86d52abea6de", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66", Pod:"csi-node-driver-d4pmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143cac43127", MAC:"66:d4:7f:fc:79:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:30.908936 containerd[1459]: 2025-01-13 21:22:30.905 [INFO][4555] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66" Namespace="calico-system" Pod="csi-node-driver-d4pmk" WorkloadEndpoint="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:30.916199 systemd-networkd[1390]: cali802430e37bc: Link UP Jan 13 21:22:30.917576 systemd-networkd[1390]: cali802430e37bc: Gained carrier Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.794 [INFO][4545] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.803 [INFO][4545] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0 calico-apiserver-64584b8b84- calico-apiserver 4b08e0a5-05ea-4ffe-b58b-456364b2d1ae 961 0 2025-01-13 21:22:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64584b8b84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64584b8b84-2rfb7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali802430e37bc [] []}} ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-2rfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.803 [INFO][4545] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-2rfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.932277 containerd[1459]: 
2025-01-13 21:22:30.848 [INFO][4577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" HandleID="k8s-pod-network.87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.857 [INFO][4577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" HandleID="k8s-pod-network.87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c4a50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64584b8b84-2rfb7", "timestamp":"2025-01-13 21:22:30.848619407 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.858 [INFO][4577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.879 [INFO][4577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.879 [INFO][4577] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.881 [INFO][4577] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" host="localhost" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.884 [INFO][4577] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.889 [INFO][4577] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.891 [INFO][4577] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.894 [INFO][4577] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.894 [INFO][4577] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" host="localhost" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.896 [INFO][4577] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.900 [INFO][4577] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" host="localhost" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.909 [INFO][4577] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" host="localhost" Jan 13 21:22:30.932277 containerd[1459]: 
2025-01-13 21:22:30.909 [INFO][4577] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" host="localhost" Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.909 [INFO][4577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:30.932277 containerd[1459]: 2025-01-13 21:22:30.909 [INFO][4577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" HandleID="k8s-pod-network.87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.932841 containerd[1459]: 2025-01-13 21:22:30.913 [INFO][4545] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-2rfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0", GenerateName:"calico-apiserver-64584b8b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b08e0a5-05ea-4ffe-b58b-456364b2d1ae", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64584b8b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64584b8b84-2rfb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali802430e37bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:30.932841 containerd[1459]: 2025-01-13 21:22:30.913 [INFO][4545] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-2rfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.932841 containerd[1459]: 2025-01-13 21:22:30.913 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali802430e37bc ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-2rfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.932841 containerd[1459]: 2025-01-13 21:22:30.918 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Namespace="calico-apiserver" 
Pod="calico-apiserver-64584b8b84-2rfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.932841 containerd[1459]: 2025-01-13 21:22:30.918 [INFO][4545] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-2rfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0", GenerateName:"calico-apiserver-64584b8b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b08e0a5-05ea-4ffe-b58b-456364b2d1ae", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64584b8b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e", Pod:"calico-apiserver-64584b8b84-2rfb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali802430e37bc", MAC:"a6:96:25:e4:e0:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:30.932841 containerd[1459]: 2025-01-13 21:22:30.929 [INFO][4545] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e" Namespace="calico-apiserver" Pod="calico-apiserver-64584b8b84-2rfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:30.933765 containerd[1459]: time="2025-01-13T21:22:30.933560983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:30.933765 containerd[1459]: time="2025-01-13T21:22:30.933702308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:30.933842 containerd[1459]: time="2025-01-13T21:22:30.933717216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.936513 containerd[1459]: time="2025-01-13T21:22:30.935193027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.955022 systemd[1]: Started cri-containerd-16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66.scope - libcontainer container 16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66. 
Jan 13 21:22:30.958988 containerd[1459]: time="2025-01-13T21:22:30.958900773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:30.959128 containerd[1459]: time="2025-01-13T21:22:30.959096320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:30.959243 containerd[1459]: time="2025-01-13T21:22:30.959199915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.959519 containerd[1459]: time="2025-01-13T21:22:30.959425548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.973577 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:30.987002 systemd[1]: Started cri-containerd-87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e.scope - libcontainer container 87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e. Jan 13 21:22:30.991307 containerd[1459]: time="2025-01-13T21:22:30.991265238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d4pmk,Uid:2faded03-4e90-4e3a-85c7-86d52abea6de,Namespace:calico-system,Attempt:1,} returns sandbox id \"16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66\"" Jan 13 21:22:31.000619 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:31.030059 containerd[1459]: time="2025-01-13T21:22:31.029994356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64584b8b84-2rfb7,Uid:4b08e0a5-05ea-4ffe-b58b-456364b2d1ae,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e\"" Jan 13 21:22:31.512031 systemd-networkd[1390]: calic32831cf31b: Gained IPv6LL Jan 13 21:22:31.642296 kubelet[2593]: E0113 21:22:31.642258 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:31.690273 kubelet[2593]: I0113 21:22:31.689906 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6496d4bbf-28tcw" podStartSLOduration=25.714355365 podStartE2EDuration="27.68988671s" podCreationTimestamp="2025-01-13 21:22:04 +0000 UTC" firstStartedPulling="2025-01-13 21:22:28.808586239 +0000 UTC m=+47.426116492" lastFinishedPulling="2025-01-13 21:22:30.784117584 +0000 UTC m=+49.401647837" observedRunningTime="2025-01-13 21:22:31.640084711 +0000 UTC m=+50.257614964" watchObservedRunningTime="2025-01-13 21:22:31.68988671 +0000 UTC m=+50.307416963" Jan 13 21:22:31.726554 kubelet[2593]: I0113 21:22:31.726494 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:31.727136 kubelet[2593]: E0113 21:22:31.727116 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:32.025080 systemd-networkd[1390]: cali802430e37bc: Gained IPv6LL Jan 13 21:22:32.407013 systemd-networkd[1390]: cali143cac43127: Gained IPv6LL Jan 13 21:22:32.458699 containerd[1459]: 
time="2025-01-13T21:22:32.458651094Z" level=info msg="StopPodSandbox for \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\"" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.509 [INFO][4804] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.509 [INFO][4804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" iface="eth0" netns="/var/run/netns/cni-239bc3bd-e16e-a6ef-f26c-fd11aebf6e0f" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.509 [INFO][4804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" iface="eth0" netns="/var/run/netns/cni-239bc3bd-e16e-a6ef-f26c-fd11aebf6e0f" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.509 [INFO][4804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" iface="eth0" netns="/var/run/netns/cni-239bc3bd-e16e-a6ef-f26c-fd11aebf6e0f" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.509 [INFO][4804] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.509 [INFO][4804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.538 [INFO][4816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" HandleID="k8s-pod-network.70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.539 [INFO][4816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.539 [INFO][4816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.544 [WARNING][4816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" HandleID="k8s-pod-network.70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.544 [INFO][4816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" HandleID="k8s-pod-network.70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.545 [INFO][4816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:32.552104 containerd[1459]: 2025-01-13 21:22:32.549 [INFO][4804] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:32.553906 containerd[1459]: time="2025-01-13T21:22:32.553690356Z" level=info msg="TearDown network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\" successfully" Jan 13 21:22:32.553906 containerd[1459]: time="2025-01-13T21:22:32.553729229Z" level=info msg="StopPodSandbox for \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\" returns successfully" Jan 13 21:22:32.557573 kubelet[2593]: E0113 21:22:32.555160 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:32.558044 containerd[1459]: time="2025-01-13T21:22:32.555992306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zkpxx,Uid:b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7,Namespace:kube-system,Attempt:1,}" Jan 13 21:22:32.555612 systemd[1]: run-netns-cni\x2d239bc3bd\x2de16e\x2da6ef\x2df26c\x2dfd11aebf6e0f.mount: Deactivated successfully. Jan 13 21:22:32.625896 kernel: bpftool[4859]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:22:32.645642 kubelet[2593]: E0113 21:22:32.645602 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:32.699217 systemd-networkd[1390]: calie0fed7a1a96: Link UP Jan 13 21:22:32.700271 systemd-networkd[1390]: calie0fed7a1a96: Gained carrier Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.614 [INFO][4832] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0 coredns-7db6d8ff4d- kube-system b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7 995 0 2025-01-13 21:21:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-zkpxx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0fed7a1a96 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zkpxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zkpxx-" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.615 [INFO][4832] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zkpxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.646 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" HandleID="k8s-pod-network.b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.654 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" HandleID="k8s-pod-network.b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-zkpxx", "timestamp":"2025-01-13 21:22:32.646432024 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.654 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.654 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.655 [INFO][4860] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.656 [INFO][4860] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" host="localhost" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.659 [INFO][4860] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.662 [INFO][4860] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.666 [INFO][4860] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.668 [INFO][4860] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.669 [INFO][4860] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" host="localhost" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.670 [INFO][4860] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126 Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.674 [INFO][4860] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" host="localhost" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.687 [INFO][4860] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" host="localhost" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.687 [INFO][4860] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" host="localhost" Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.687 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:22:32.724418 containerd[1459]: 2025-01-13 21:22:32.687 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" HandleID="k8s-pod-network.b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.725065 containerd[1459]: 2025-01-13 21:22:32.691 [INFO][4832] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zkpxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-zkpxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fed7a1a96", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:32.725065 containerd[1459]: 2025-01-13 21:22:32.691 [INFO][4832] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zkpxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.725065 containerd[1459]: 2025-01-13 21:22:32.692 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0fed7a1a96 ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zkpxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.725065 containerd[1459]: 2025-01-13 21:22:32.700 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zkpxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.725065 containerd[1459]: 2025-01-13 21:22:32.700 
[INFO][4832] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zkpxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126", Pod:"coredns-7db6d8ff4d-zkpxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fed7a1a96", MAC:"1a:06:2a:a7:99:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:32.725065 containerd[1459]: 2025-01-13 21:22:32.713 [INFO][4832] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zkpxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:32.728575 systemd[1]: Started sshd@15-10.0.0.66:22-10.0.0.1:54428.service - OpenSSH per-connection server daemon (10.0.0.1:54428). Jan 13 21:22:32.766385 containerd[1459]: time="2025-01-13T21:22:32.765316696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:32.767289 containerd[1459]: time="2025-01-13T21:22:32.766906299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:32.767289 containerd[1459]: time="2025-01-13T21:22:32.766938239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:32.767289 containerd[1459]: time="2025-01-13T21:22:32.767050350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:32.779978 sshd[4886]: Accepted publickey for core from 10.0.0.1 port 54428 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:32.778967 sshd[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:32.796314 systemd-logind[1443]: New session 16 of user core. Jan 13 21:22:32.806873 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:22:32.821038 systemd[1]: Started cri-containerd-b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126.scope - libcontainer container b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126. Jan 13 21:22:32.854955 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:22:32.938151 containerd[1459]: time="2025-01-13T21:22:32.938073290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zkpxx,Uid:b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7,Namespace:kube-system,Attempt:1,} returns sandbox id \"b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126\"" Jan 13 21:22:32.939458 kubelet[2593]: E0113 21:22:32.939393 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:32.942685 containerd[1459]: time="2025-01-13T21:22:32.942390923Z" level=info msg="CreateContainer within sandbox \"b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:22:33.069379 containerd[1459]: time="2025-01-13T21:22:33.069333612Z" level=info msg="CreateContainer within sandbox \"b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ab92d8000bfc077dcd9defb70e499a4973c8701ff81adde48245a841b6f5d07\"" Jan 13 21:22:33.070719 containerd[1459]: time="2025-01-13T21:22:33.070684726Z" level=info msg="StartContainer for \"4ab92d8000bfc077dcd9defb70e499a4973c8701ff81adde48245a841b6f5d07\"" Jan 13 21:22:33.122603 sshd[4886]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:33.123522 systemd-networkd[1390]: vxlan.calico: Link UP Jan 13 21:22:33.123576 systemd-networkd[1390]: vxlan.calico: Gained carrier Jan 13 21:22:33.127109 systemd[1]: Started cri-containerd-4ab92d8000bfc077dcd9defb70e499a4973c8701ff81adde48245a841b6f5d07.scope - libcontainer container 4ab92d8000bfc077dcd9defb70e499a4973c8701ff81adde48245a841b6f5d07. Jan 13 21:22:33.128283 systemd[1]: sshd@15-10.0.0.66:22-10.0.0.1:54428.service: Deactivated successfully. Jan 13 21:22:33.130949 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:22:33.132024 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:22:33.136760 systemd-logind[1443]: Removed session 16. 
Jan 13 21:22:33.173373 containerd[1459]: time="2025-01-13T21:22:33.173261474Z" level=info msg="StartContainer for \"4ab92d8000bfc077dcd9defb70e499a4973c8701ff81adde48245a841b6f5d07\" returns successfully" Jan 13 21:22:33.473143 containerd[1459]: time="2025-01-13T21:22:33.473100750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:33.473905 containerd[1459]: time="2025-01-13T21:22:33.473871456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 21:22:33.475199 containerd[1459]: time="2025-01-13T21:22:33.475162388Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:33.477414 containerd[1459]: time="2025-01-13T21:22:33.477365593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:33.478060 containerd[1459]: time="2025-01-13T21:22:33.478014461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.691237376s" Jan 13 21:22:33.478060 containerd[1459]: time="2025-01-13T21:22:33.478052302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:22:33.479707 containerd[1459]: time="2025-01-13T21:22:33.479516851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:22:33.480528 containerd[1459]: time="2025-01-13T21:22:33.480486600Z" level=info msg="CreateContainer within sandbox \"c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:22:33.505031 containerd[1459]: time="2025-01-13T21:22:33.504979594Z" level=info msg="CreateContainer within sandbox \"c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6c128c20a915f0c745f448af86ff2aa943458e759aa2d349000609527619da2d\"" Jan 13 21:22:33.506562 containerd[1459]: time="2025-01-13T21:22:33.505561787Z" level=info msg="StartContainer for \"6c128c20a915f0c745f448af86ff2aa943458e759aa2d349000609527619da2d\"" Jan 13 21:22:33.537017 systemd[1]: Started cri-containerd-6c128c20a915f0c745f448af86ff2aa943458e759aa2d349000609527619da2d.scope - libcontainer container 6c128c20a915f0c745f448af86ff2aa943458e759aa2d349000609527619da2d. 
Jan 13 21:22:33.574552 containerd[1459]: time="2025-01-13T21:22:33.574499707Z" level=info msg="StartContainer for \"6c128c20a915f0c745f448af86ff2aa943458e759aa2d349000609527619da2d\" returns successfully" Jan 13 21:22:33.649878 kubelet[2593]: E0113 21:22:33.649836 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:33.656648 kubelet[2593]: I0113 21:22:33.656589 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64584b8b84-x9bt6" podStartSLOduration=25.930831042 podStartE2EDuration="29.656573148s" podCreationTimestamp="2025-01-13 21:22:04 +0000 UTC" firstStartedPulling="2025-01-13 21:22:29.753549492 +0000 UTC m=+48.371079745" lastFinishedPulling="2025-01-13 21:22:33.479291598 +0000 UTC m=+52.096821851" observedRunningTime="2025-01-13 21:22:33.656363455 +0000 UTC m=+52.273893708" watchObservedRunningTime="2025-01-13 21:22:33.656573148 +0000 UTC m=+52.274103401" Jan 13 21:22:33.667194 kubelet[2593]: I0113 21:22:33.667114 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zkpxx" podStartSLOduration=37.667095515 podStartE2EDuration="37.667095515s" podCreationTimestamp="2025-01-13 21:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:33.667013802 +0000 UTC m=+52.284544055" watchObservedRunningTime="2025-01-13 21:22:33.667095515 +0000 UTC m=+52.284625768" Jan 13 21:22:34.519089 systemd-networkd[1390]: calie0fed7a1a96: Gained IPv6LL Jan 13 21:22:34.647014 systemd-networkd[1390]: vxlan.calico: Gained IPv6LL Jan 13 21:22:34.651055 kubelet[2593]: I0113 21:22:34.651029 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:34.651791 kubelet[2593]: E0113 21:22:34.651772 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:34.869894 containerd[1459]: time="2025-01-13T21:22:34.869767518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:34.870534 containerd[1459]: time="2025-01-13T21:22:34.870468253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:22:34.871789 containerd[1459]: time="2025-01-13T21:22:34.871760448Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:34.873661 containerd[1459]: time="2025-01-13T21:22:34.873633042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:34.874316 containerd[1459]: time="2025-01-13T21:22:34.874293441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.394742687s" Jan 13 
21:22:34.874357 containerd[1459]: time="2025-01-13T21:22:34.874321233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:22:34.875239 containerd[1459]: time="2025-01-13T21:22:34.875204661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:22:34.877008 containerd[1459]: time="2025-01-13T21:22:34.876983549Z" level=info msg="CreateContainer within sandbox \"16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:22:34.897178 containerd[1459]: time="2025-01-13T21:22:34.897125747Z" level=info msg="CreateContainer within sandbox \"16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"01e5f4a1b45cbc29566e087c130d574ad39d906247e50a7e3151763f0591de12\"" Jan 13 21:22:34.897773 containerd[1459]: time="2025-01-13T21:22:34.897733206Z" level=info msg="StartContainer for \"01e5f4a1b45cbc29566e087c130d574ad39d906247e50a7e3151763f0591de12\"" Jan 13 21:22:34.940276 systemd[1]: Started cri-containerd-01e5f4a1b45cbc29566e087c130d574ad39d906247e50a7e3151763f0591de12.scope - libcontainer container 01e5f4a1b45cbc29566e087c130d574ad39d906247e50a7e3151763f0591de12. Jan 13 21:22:35.070480 containerd[1459]: time="2025-01-13T21:22:35.070420217Z" level=info msg="StartContainer for \"01e5f4a1b45cbc29566e087c130d574ad39d906247e50a7e3151763f0591de12\" returns successfully" Jan 13 21:22:35.261136 containerd[1459]: time="2025-01-13T21:22:35.261085956Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:35.261840 containerd[1459]: time="2025-01-13T21:22:35.261796207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 21:22:35.263809 containerd[1459]: time="2025-01-13T21:22:35.263773608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 388.539341ms" Jan 13 21:22:35.263809 containerd[1459]: time="2025-01-13T21:22:35.263804477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:22:35.264705 containerd[1459]: time="2025-01-13T21:22:35.264674028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:22:35.266081 containerd[1459]: time="2025-01-13T21:22:35.266054558Z" level=info msg="CreateContainer within sandbox \"87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:22:35.280939 containerd[1459]: time="2025-01-13T21:22:35.280897702Z" level=info msg="CreateContainer within sandbox \"87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"77261dae10303c87d8861040c1fc34bc2c27b6b99c76e402acd1a001817afb6a\"" Jan 13 21:22:35.281282 containerd[1459]: 
time="2025-01-13T21:22:35.281251455Z" level=info msg="StartContainer for \"77261dae10303c87d8861040c1fc34bc2c27b6b99c76e402acd1a001817afb6a\"" Jan 13 21:22:35.312992 systemd[1]: Started cri-containerd-77261dae10303c87d8861040c1fc34bc2c27b6b99c76e402acd1a001817afb6a.scope - libcontainer container 77261dae10303c87d8861040c1fc34bc2c27b6b99c76e402acd1a001817afb6a. Jan 13 21:22:35.351459 containerd[1459]: time="2025-01-13T21:22:35.351417249Z" level=info msg="StartContainer for \"77261dae10303c87d8861040c1fc34bc2c27b6b99c76e402acd1a001817afb6a\" returns successfully" Jan 13 21:22:35.660503 kubelet[2593]: E0113 21:22:35.660386 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:36.661722 kubelet[2593]: I0113 21:22:36.661690 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:36.662780 kubelet[2593]: E0113 21:22:36.662715 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:36.738187 containerd[1459]: time="2025-01-13T21:22:36.738124021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:36.739107 containerd[1459]: time="2025-01-13T21:22:36.739054838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:22:36.740389 containerd[1459]: time="2025-01-13T21:22:36.740361680Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:36.742710 containerd[1459]: time="2025-01-13T21:22:36.742677425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:36.743418 containerd[1459]: time="2025-01-13T21:22:36.743363853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.478652674s" Jan 13 21:22:36.743418 containerd[1459]: time="2025-01-13T21:22:36.743400892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:22:36.745747 containerd[1459]: time="2025-01-13T21:22:36.745716097Z" level=info msg="CreateContainer within sandbox \"16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:22:36.759228 containerd[1459]: time="2025-01-13T21:22:36.759189789Z" level=info msg="CreateContainer within sandbox \"16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"d42fcd75ab70bdd3c6e66e81ed166d922b6eaf64b48b7c0475c6d93520b217c9\"" Jan 13 21:22:36.759606 containerd[1459]: time="2025-01-13T21:22:36.759577807Z" level=info msg="StartContainer for \"d42fcd75ab70bdd3c6e66e81ed166d922b6eaf64b48b7c0475c6d93520b217c9\"" Jan 13 21:22:36.793994 systemd[1]: Started cri-containerd-d42fcd75ab70bdd3c6e66e81ed166d922b6eaf64b48b7c0475c6d93520b217c9.scope - libcontainer container d42fcd75ab70bdd3c6e66e81ed166d922b6eaf64b48b7c0475c6d93520b217c9. Jan 13 21:22:36.825783 containerd[1459]: time="2025-01-13T21:22:36.825731517Z" level=info msg="StartContainer for \"d42fcd75ab70bdd3c6e66e81ed166d922b6eaf64b48b7c0475c6d93520b217c9\" returns successfully" Jan 13 21:22:37.525019 kubelet[2593]: I0113 21:22:37.524983 2593 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:22:37.525019 kubelet[2593]: I0113 21:22:37.525013 2593 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:22:37.675741 kubelet[2593]: I0113 21:22:37.674803 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64584b8b84-2rfb7" podStartSLOduration=29.441542924 podStartE2EDuration="33.674787783s" podCreationTimestamp="2025-01-13 21:22:04 +0000 UTC" firstStartedPulling="2025-01-13 21:22:31.031245885 +0000 UTC m=+49.648776138" lastFinishedPulling="2025-01-13 21:22:35.264490744 +0000 UTC m=+53.882020997" observedRunningTime="2025-01-13 21:22:35.667963739 +0000 UTC m=+54.285493992" watchObservedRunningTime="2025-01-13 21:22:37.674787783 +0000 UTC m=+56.292318036" Jan 13 21:22:37.675741 kubelet[2593]: I0113 21:22:37.675003 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d4pmk" podStartSLOduration=27.923367722 podStartE2EDuration="33.675000652s" podCreationTimestamp="2025-01-13 21:22:04 +0000 UTC" firstStartedPulling="2025-01-13 21:22:30.99257681 +0000 UTC m=+49.610107063" lastFinishedPulling="2025-01-13 21:22:36.74420974 +0000 UTC m=+55.361739993" observedRunningTime="2025-01-13 21:22:37.674183029 +0000 UTC m=+56.291713282" watchObservedRunningTime="2025-01-13 21:22:37.675000652 +0000 UTC m=+56.292530906" Jan 13 21:22:38.133850 systemd[1]: Started sshd@16-10.0.0.66:22-10.0.0.1:40042.service - OpenSSH per-connection server daemon (10.0.0.1:40042). Jan 13 21:22:38.170148 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 40042 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:38.171769 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:38.175486 systemd-logind[1443]: New session 17 of user core. Jan 13 21:22:38.184990 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:22:38.299180 sshd[5289]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:38.303480 systemd[1]: sshd@16-10.0.0.66:22-10.0.0.1:40042.service: Deactivated successfully. Jan 13 21:22:38.305590 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:22:38.306282 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:22:38.307324 systemd-logind[1443]: Removed session 17. 
Jan 13 21:22:41.442780 containerd[1459]: time="2025-01-13T21:22:41.442739194Z" level=info msg="StopPodSandbox for \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\"" Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.476 [WARNING][5326] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126", Pod:"coredns-7db6d8ff4d-zkpxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fed7a1a96", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.476 [INFO][5326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.477 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" iface="eth0" netns="" Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.477 [INFO][5326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.477 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.497 [INFO][5335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" HandleID="k8s-pod-network.70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.497 [INFO][5335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.497 [INFO][5335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.502 [WARNING][5335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" HandleID="k8s-pod-network.70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.502 [INFO][5335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" HandleID="k8s-pod-network.70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.503 [INFO][5335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:41.508179 containerd[1459]: 2025-01-13 21:22:41.505 [INFO][5326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:41.508832 containerd[1459]: time="2025-01-13T21:22:41.508198889Z" level=info msg="TearDown network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\" successfully" Jan 13 21:22:41.508832 containerd[1459]: time="2025-01-13T21:22:41.508223986Z" level=info msg="StopPodSandbox for \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\" returns successfully" Jan 13 21:22:41.514788 containerd[1459]: time="2025-01-13T21:22:41.514751155Z" level=info msg="RemovePodSandbox for \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\"" Jan 13 21:22:41.516944 containerd[1459]: time="2025-01-13T21:22:41.516919207Z" level=info msg="Forcibly stopping sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\"" Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.549 [WARNING][5358] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b8ee4f35-b1c3-4ecd-b94b-0ab75b2b3ba7", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9fa158ab3e3b5ad04a93709bb00b428cebbf5205677881a269cf4a59ae94126", Pod:"coredns-7db6d8ff4d-zkpxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fed7a1a96", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.549 [INFO][5358] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.549 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" iface="eth0" netns="" Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.549 [INFO][5358] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.549 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.568 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" HandleID="k8s-pod-network.70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.568 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.568 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.572 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" HandleID="k8s-pod-network.70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.572 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" HandleID="k8s-pod-network.70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Workload="localhost-k8s-coredns--7db6d8ff4d--zkpxx-eth0" Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.573 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:41.578072 containerd[1459]: 2025-01-13 21:22:41.575 [INFO][5358] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1" Jan 13 21:22:41.578724 containerd[1459]: time="2025-01-13T21:22:41.578662503Z" level=info msg="TearDown network for sandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\" successfully" Jan 13 21:22:41.644505 containerd[1459]: time="2025-01-13T21:22:41.644462367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:41.644619 containerd[1459]: time="2025-01-13T21:22:41.644544812Z" level=info msg="RemovePodSandbox \"70ef73657693374908151f50cb144716575eb852d148b3d6c0d3f3ac53e669b1\" returns successfully" Jan 13 21:22:41.645265 containerd[1459]: time="2025-01-13T21:22:41.644939675Z" level=info msg="StopPodSandbox for \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\"" Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.683 [WARNING][5389] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"95694724-301a-4ddf-b650-581e197892dc", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2", Pod:"coredns-7db6d8ff4d-wv2tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f49869b38f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.683 [INFO][5389] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.683 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" iface="eth0" netns="" Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.683 [INFO][5389] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.683 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.704 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" HandleID="k8s-pod-network.be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.704 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.704 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.710 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" HandleID="k8s-pod-network.be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.710 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" HandleID="k8s-pod-network.be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.711 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:41.717346 containerd[1459]: 2025-01-13 21:22:41.714 [INFO][5389] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:41.717905 containerd[1459]: time="2025-01-13T21:22:41.717374683Z" level=info msg="TearDown network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\" successfully" Jan 13 21:22:41.717905 containerd[1459]: time="2025-01-13T21:22:41.717408035Z" level=info msg="StopPodSandbox for \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\" returns successfully" Jan 13 21:22:41.717975 containerd[1459]: time="2025-01-13T21:22:41.717944455Z" level=info msg="RemovePodSandbox for \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\"" Jan 13 21:22:41.718009 containerd[1459]: time="2025-01-13T21:22:41.717978269Z" level=info msg="Forcibly stopping sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\"" Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.762 [WARNING][5420] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"95694724-301a-4ddf-b650-581e197892dc", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb453e81227b71dd4dbd79f6475f274a33c0cbd35d21f2060fa0d91ddb0cd3a2", Pod:"coredns-7db6d8ff4d-wv2tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f49869b38f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.764 [INFO][5420] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.764 [INFO][5420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" iface="eth0" netns="" Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.764 [INFO][5420] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.764 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.786 [INFO][5428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" HandleID="k8s-pod-network.be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.786 [INFO][5428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.786 [INFO][5428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.791 [WARNING][5428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" HandleID="k8s-pod-network.be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.791 [INFO][5428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" HandleID="k8s-pod-network.be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Workload="localhost-k8s-coredns--7db6d8ff4d--wv2tn-eth0" Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.793 [INFO][5428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:41.798582 containerd[1459]: 2025-01-13 21:22:41.795 [INFO][5420] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1" Jan 13 21:22:41.799017 containerd[1459]: time="2025-01-13T21:22:41.798625589Z" level=info msg="TearDown network for sandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\" successfully" Jan 13 21:22:41.804549 containerd[1459]: time="2025-01-13T21:22:41.804487777Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:41.804684 containerd[1459]: time="2025-01-13T21:22:41.804561585Z" level=info msg="RemovePodSandbox \"be1ef70b1e5cf86bbcfe26f6e69d6b909a140fc13969029bd24626a542c9e9a1\" returns successfully" Jan 13 21:22:41.805004 containerd[1459]: time="2025-01-13T21:22:41.804975665Z" level=info msg="StopPodSandbox for \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\"" Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.844 [WARNING][5451] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0", GenerateName:"calico-apiserver-64584b8b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"c65feba5-e029-4c69-b7ee-c32a5deacfc3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64584b8b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58", Pod:"calico-apiserver-64584b8b84-x9bt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic32831cf31b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.844 [INFO][5451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.844 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" iface="eth0" netns="" Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.844 [INFO][5451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.844 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.868 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" HandleID="k8s-pod-network.04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.868 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.868 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.873 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" HandleID="k8s-pod-network.04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.874 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" HandleID="k8s-pod-network.04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.875 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:41.880628 containerd[1459]: 2025-01-13 21:22:41.877 [INFO][5451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:41.881175 containerd[1459]: time="2025-01-13T21:22:41.880669648Z" level=info msg="TearDown network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\" successfully" Jan 13 21:22:41.881175 containerd[1459]: time="2025-01-13T21:22:41.880699725Z" level=info msg="StopPodSandbox for \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\" returns successfully" Jan 13 21:22:41.881305 containerd[1459]: time="2025-01-13T21:22:41.881257185Z" level=info msg="RemovePodSandbox for \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\"" Jan 13 21:22:41.881305 containerd[1459]: time="2025-01-13T21:22:41.881298011Z" level=info msg="Forcibly stopping sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\"" Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.917 [WARNING][5482] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0", GenerateName:"calico-apiserver-64584b8b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"c65feba5-e029-4c69-b7ee-c32a5deacfc3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64584b8b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c29c5cb840be94c567bdf802a9f499b754f854537723efbab1c05b1ed0474b58", Pod:"calico-apiserver-64584b8b84-x9bt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic32831cf31b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.917 [INFO][5482] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.917 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" iface="eth0" netns="" Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.917 [INFO][5482] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.917 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.936 [INFO][5489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" HandleID="k8s-pod-network.04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.936 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.936 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.941 [WARNING][5489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" HandleID="k8s-pod-network.04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.941 [INFO][5489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" HandleID="k8s-pod-network.04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Workload="localhost-k8s-calico--apiserver--64584b8b84--x9bt6-eth0" Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.942 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:41.947573 containerd[1459]: 2025-01-13 21:22:41.944 [INFO][5482] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a" Jan 13 21:22:41.948375 containerd[1459]: time="2025-01-13T21:22:41.947615610Z" level=info msg="TearDown network for sandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\" successfully" Jan 13 21:22:41.951799 containerd[1459]: time="2025-01-13T21:22:41.951762899Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:41.951873 containerd[1459]: time="2025-01-13T21:22:41.951840425Z" level=info msg="RemovePodSandbox \"04a9bcc9f3f29905af0c727f443179c7575d1b7517a9bd032b7d341205c8f68a\" returns successfully" Jan 13 21:22:41.952395 containerd[1459]: time="2025-01-13T21:22:41.952362508Z" level=info msg="StopPodSandbox for \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\"" Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:41.984 [WARNING][5512] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0", GenerateName:"calico-apiserver-64584b8b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b08e0a5-05ea-4ffe-b58b-456364b2d1ae", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64584b8b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e", Pod:"calico-apiserver-64584b8b84-2rfb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali802430e37bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:41.985 [INFO][5512] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:41.985 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" iface="eth0" netns="" Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:41.985 [INFO][5512] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:41.985 [INFO][5512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:42.003 [INFO][5519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" HandleID="k8s-pod-network.459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:42.003 [INFO][5519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:42.003 [INFO][5519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:42.008 [WARNING][5519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" HandleID="k8s-pod-network.459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:42.008 [INFO][5519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" HandleID="k8s-pod-network.459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:42.010 [INFO][5519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:42.015094 containerd[1459]: 2025-01-13 21:22:42.012 [INFO][5512] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:42.015612 containerd[1459]: time="2025-01-13T21:22:42.015130225Z" level=info msg="TearDown network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\" successfully" Jan 13 21:22:42.015612 containerd[1459]: time="2025-01-13T21:22:42.015164521Z" level=info msg="StopPodSandbox for \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\" returns successfully" Jan 13 21:22:42.016344 containerd[1459]: time="2025-01-13T21:22:42.015890918Z" level=info msg="RemovePodSandbox for \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\"" Jan 13 21:22:42.016344 containerd[1459]: time="2025-01-13T21:22:42.015946546Z" level=info msg="Forcibly stopping sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\"" Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.049 [WARNING][5541] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0", GenerateName:"calico-apiserver-64584b8b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b08e0a5-05ea-4ffe-b58b-456364b2d1ae", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64584b8b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87e3785e7a02acd08f5dbc7e2e718b897ff2802dc2a0ae0afa673a1af05ef55e", Pod:"calico-apiserver-64584b8b84-2rfb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali802430e37bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.049 [INFO][5541] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.049 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" iface="eth0" netns="" Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.049 [INFO][5541] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.049 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.068 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" HandleID="k8s-pod-network.459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.068 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.069 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.073 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" HandleID="k8s-pod-network.459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.073 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" HandleID="k8s-pod-network.459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Workload="localhost-k8s-calico--apiserver--64584b8b84--2rfb7-eth0" Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.074 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:42.079120 containerd[1459]: 2025-01-13 21:22:42.076 [INFO][5541] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53" Jan 13 21:22:42.079526 containerd[1459]: time="2025-01-13T21:22:42.079160167Z" level=info msg="TearDown network for sandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\" successfully" Jan 13 21:22:42.082925 containerd[1459]: time="2025-01-13T21:22:42.082827778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:42.082925 containerd[1459]: time="2025-01-13T21:22:42.082902013Z" level=info msg="RemovePodSandbox \"459d16794f6d17fab193b8cd9bb9c0603b8e0f975c9fdaafc3d4bd97526a3c53\" returns successfully" Jan 13 21:22:42.083481 containerd[1459]: time="2025-01-13T21:22:42.083440726Z" level=info msg="StopPodSandbox for \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\"" Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.116 [WARNING][5570] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d4pmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2faded03-4e90-4e3a-85c7-86d52abea6de", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66", Pod:"csi-node-driver-d4pmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143cac43127", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.116 [INFO][5570] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.116 [INFO][5570] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" iface="eth0" netns="" Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.116 [INFO][5570] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.116 [INFO][5570] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.136 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" HandleID="k8s-pod-network.1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.136 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.136 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.141 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" HandleID="k8s-pod-network.1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.141 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" HandleID="k8s-pod-network.1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.143 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:42.148081 containerd[1459]: 2025-01-13 21:22:42.145 [INFO][5570] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:42.148631 containerd[1459]: time="2025-01-13T21:22:42.148107361Z" level=info msg="TearDown network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\" successfully" Jan 13 21:22:42.148631 containerd[1459]: time="2025-01-13T21:22:42.148134363Z" level=info msg="StopPodSandbox for \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\" returns successfully" Jan 13 21:22:42.148631 containerd[1459]: time="2025-01-13T21:22:42.148614282Z" level=info msg="RemovePodSandbox for \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\"" Jan 13 21:22:42.148824 containerd[1459]: time="2025-01-13T21:22:42.148638179Z" level=info msg="Forcibly stopping sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\"" Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.184 [WARNING][5599] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d4pmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2faded03-4e90-4e3a-85c7-86d52abea6de", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16e61f7d8eb352c760b6c223e81ee7d3507e7fbc11af0b547ac6ef6963d8de66", Pod:"csi-node-driver-d4pmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143cac43127", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.185 [INFO][5599] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.185 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" iface="eth0" netns="" Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.185 [INFO][5599] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.185 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.203 [INFO][5607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" HandleID="k8s-pod-network.1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.203 [INFO][5607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.203 [INFO][5607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.209 [WARNING][5607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" HandleID="k8s-pod-network.1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.209 [INFO][5607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" HandleID="k8s-pod-network.1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Workload="localhost-k8s-csi--node--driver--d4pmk-eth0" Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.210 [INFO][5607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:42.214430 containerd[1459]: 2025-01-13 21:22:42.212 [INFO][5599] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e" Jan 13 21:22:42.214883 containerd[1459]: time="2025-01-13T21:22:42.214467174Z" level=info msg="TearDown network for sandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\" successfully" Jan 13 21:22:42.256780 containerd[1459]: time="2025-01-13T21:22:42.256716673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:42.256780 containerd[1459]: time="2025-01-13T21:22:42.256783583Z" level=info msg="RemovePodSandbox \"1f8a7803e448676d8062089ba8d85452b8db1e7ffc3a301a81a3fdf42662505e\" returns successfully" Jan 13 21:22:42.257317 containerd[1459]: time="2025-01-13T21:22:42.257278791Z" level=info msg="StopPodSandbox for \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\"" Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.320 [WARNING][5630] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0", GenerateName:"calico-kube-controllers-6496d4bbf-", Namespace:"calico-system", SelfLink:"", UID:"10badecf-9cf2-455b-8b0e-b7541f200545", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496d4bbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b", Pod:"calico-kube-controllers-6496d4bbf-28tcw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie42b02478fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.320 [INFO][5630] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.320 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" iface="eth0" netns="" Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.320 [INFO][5630] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.321 [INFO][5630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.339 [INFO][5638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" HandleID="k8s-pod-network.d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.339 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.339 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.345 [WARNING][5638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" HandleID="k8s-pod-network.d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.345 [INFO][5638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" HandleID="k8s-pod-network.d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.346 [INFO][5638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:42.351036 containerd[1459]: 2025-01-13 21:22:42.348 [INFO][5630] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:42.351036 containerd[1459]: time="2025-01-13T21:22:42.351007019Z" level=info msg="TearDown network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\" successfully" Jan 13 21:22:42.351036 containerd[1459]: time="2025-01-13T21:22:42.351033771Z" level=info msg="StopPodSandbox for \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\" returns successfully" Jan 13 21:22:42.351734 containerd[1459]: time="2025-01-13T21:22:42.351693668Z" level=info msg="RemovePodSandbox for \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\"" Jan 13 21:22:42.351734 containerd[1459]: time="2025-01-13T21:22:42.351732634Z" level=info msg="Forcibly stopping sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\"" Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.385 [WARNING][5661] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0", GenerateName:"calico-kube-controllers-6496d4bbf-", Namespace:"calico-system", SelfLink:"", UID:"10badecf-9cf2-455b-8b0e-b7541f200545", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496d4bbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"917529da3013d909add0fb17c8ede12053dfe229ed235aff5a25a14c7553877b", Pod:"calico-kube-controllers-6496d4bbf-28tcw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie42b02478fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.385 [INFO][5661] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.385 [INFO][5661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" iface="eth0" netns="" Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.385 [INFO][5661] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.385 [INFO][5661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.405 [INFO][5669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" HandleID="k8s-pod-network.d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.405 [INFO][5669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.405 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.410 [WARNING][5669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" HandleID="k8s-pod-network.d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.410 [INFO][5669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" HandleID="k8s-pod-network.d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Workload="localhost-k8s-calico--kube--controllers--6496d4bbf--28tcw-eth0" Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.411 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:22:42.415695 containerd[1459]: 2025-01-13 21:22:42.413 [INFO][5661] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8" Jan 13 21:22:42.416137 containerd[1459]: time="2025-01-13T21:22:42.415732228Z" level=info msg="TearDown network for sandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\" successfully" Jan 13 21:22:42.454052 containerd[1459]: time="2025-01-13T21:22:42.454017871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:22:42.454052 containerd[1459]: time="2025-01-13T21:22:42.454061636Z" level=info msg="RemovePodSandbox \"d3c137144388ef5df64da1c7535e8149e08abac8a533f20e0dff4e4921c5fee8\" returns successfully" Jan 13 21:22:43.311155 systemd[1]: Started sshd@17-10.0.0.66:22-10.0.0.1:40048.service - OpenSSH per-connection server daemon (10.0.0.1:40048). Jan 13 21:22:43.344240 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 40048 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:43.346052 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:43.350120 systemd-logind[1443]: New session 18 of user core. Jan 13 21:22:43.357978 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:22:43.472245 sshd[5678]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:43.476933 systemd[1]: sshd@17-10.0.0.66:22-10.0.0.1:40048.service: Deactivated successfully. Jan 13 21:22:43.479077 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:22:43.479804 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:22:43.480750 systemd-logind[1443]: Removed session 18. Jan 13 21:22:48.483744 systemd[1]: Started sshd@18-10.0.0.66:22-10.0.0.1:35904.service - OpenSSH per-connection server daemon (10.0.0.1:35904). Jan 13 21:22:48.513804 sshd[5720]: Accepted publickey for core from 10.0.0.1 port 35904 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:48.515213 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:48.518874 systemd-logind[1443]: New session 19 of user core. Jan 13 21:22:48.524996 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:22:48.626692 sshd[5720]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:48.639743 systemd[1]: sshd@18-10.0.0.66:22-10.0.0.1:35904.service: Deactivated successfully. 
Jan 13 21:22:48.641659 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:22:48.643457 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:22:48.645249 systemd[1]: Started sshd@19-10.0.0.66:22-10.0.0.1:35916.service - OpenSSH per-connection server daemon (10.0.0.1:35916). Jan 13 21:22:48.646070 systemd-logind[1443]: Removed session 19. Jan 13 21:22:48.682293 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 35916 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:48.683650 sshd[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:48.687374 systemd-logind[1443]: New session 20 of user core. Jan 13 21:22:48.696975 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:22:48.866879 sshd[5735]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:48.878700 systemd[1]: sshd@19-10.0.0.66:22-10.0.0.1:35916.service: Deactivated successfully. Jan 13 21:22:48.880399 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:22:48.881766 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:22:48.883022 systemd[1]: Started sshd@20-10.0.0.66:22-10.0.0.1:35932.service - OpenSSH per-connection server daemon (10.0.0.1:35932). Jan 13 21:22:48.883945 systemd-logind[1443]: Removed session 20. Jan 13 21:22:48.914815 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 35932 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:48.916187 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:48.919898 systemd-logind[1443]: New session 21 of user core. Jan 13 21:22:48.928972 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:22:50.244302 sshd[5747]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:50.261236 systemd[1]: Started sshd@21-10.0.0.66:22-10.0.0.1:35946.service - OpenSSH per-connection server daemon (10.0.0.1:35946). Jan 13 21:22:50.261944 systemd[1]: sshd@20-10.0.0.66:22-10.0.0.1:35932.service: Deactivated successfully. Jan 13 21:22:50.265277 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:22:50.266383 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:22:50.269546 systemd-logind[1443]: Removed session 21. Jan 13 21:22:50.291609 sshd[5764]: Accepted publickey for core from 10.0.0.1 port 35946 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:50.293071 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:50.296997 systemd-logind[1443]: New session 22 of user core. Jan 13 21:22:50.306981 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:22:50.497871 sshd[5764]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:50.506774 systemd[1]: sshd@21-10.0.0.66:22-10.0.0.1:35946.service: Deactivated successfully. Jan 13 21:22:50.508478 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:22:50.509768 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:22:50.510985 systemd[1]: Started sshd@22-10.0.0.66:22-10.0.0.1:35956.service - OpenSSH per-connection server daemon (10.0.0.1:35956). Jan 13 21:22:50.511640 systemd-logind[1443]: Removed session 22. 
Jan 13 21:22:50.541328 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 35956 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:50.542709 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:50.546279 systemd-logind[1443]: New session 23 of user core. Jan 13 21:22:50.554971 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:22:50.660930 sshd[5779]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:50.664934 systemd[1]: sshd@22-10.0.0.66:22-10.0.0.1:35956.service: Deactivated successfully. Jan 13 21:22:50.666899 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:22:50.667585 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:22:50.668476 systemd-logind[1443]: Removed session 23. Jan 13 21:22:52.433667 kubelet[2593]: I0113 21:22:52.433631 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:55.673338 systemd[1]: Started sshd@23-10.0.0.66:22-10.0.0.1:35972.service - OpenSSH per-connection server daemon (10.0.0.1:35972). Jan 13 21:22:55.703121 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 35972 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:22:55.704440 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:55.708381 systemd-logind[1443]: New session 24 of user core. Jan 13 21:22:55.714976 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:22:55.818395 sshd[5807]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:55.822192 systemd[1]: sshd@23-10.0.0.66:22-10.0.0.1:35972.service: Deactivated successfully. Jan 13 21:22:55.824388 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:22:55.825029 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:22:55.825814 systemd-logind[1443]: Removed session 24. Jan 13 21:22:58.459011 kubelet[2593]: E0113 21:22:58.458968 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:22:58.592373 kubelet[2593]: I0113 21:22:58.592322 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:22:59.459094 kubelet[2593]: E0113 21:22:59.459054 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:00.830487 systemd[1]: Started sshd@24-10.0.0.66:22-10.0.0.1:59038.service - OpenSSH per-connection server daemon (10.0.0.1:59038). Jan 13 21:23:00.864985 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 59038 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:23:00.866480 sshd[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:00.870347 systemd-logind[1443]: New session 25 of user core. Jan 13 21:23:00.880984 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:23:00.987452 sshd[5825]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:00.991676 systemd[1]: sshd@24-10.0.0.66:22-10.0.0.1:59038.service: Deactivated successfully. Jan 13 21:23:00.993661 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:23:00.994300 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. 
Jan 13 21:23:00.995260 systemd-logind[1443]: Removed session 25. Jan 13 21:23:05.164757 kubelet[2593]: E0113 21:23:05.164701 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:23:05.999196 systemd[1]: Started sshd@25-10.0.0.66:22-10.0.0.1:59040.service - OpenSSH per-connection server daemon (10.0.0.1:59040). Jan 13 21:23:06.034991 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 59040 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:23:06.036888 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:06.040916 systemd-logind[1443]: New session 26 of user core. Jan 13 21:23:06.046028 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:23:06.156753 sshd[5863]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:06.161150 systemd[1]: sshd@25-10.0.0.66:22-10.0.0.1:59040.service: Deactivated successfully. Jan 13 21:23:06.163471 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:23:06.164331 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:23:06.165358 systemd-logind[1443]: Removed session 26.