Jan 30 13:41:54.888598 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:41:54.888625 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:41:54.888639 kernel: BIOS-provided physical RAM map:
Jan 30 13:41:54.888648 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:41:54.888656 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:41:54.888665 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:41:54.888676 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 30 13:41:54.888685 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 30 13:41:54.888693 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:41:54.888705 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 13:41:54.888714 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:41:54.888723 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:41:54.888731 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 13:41:54.888740 kernel: NX (Execute Disable) protection: active
Jan 30 13:41:54.888751 kernel: APIC: Static calls initialized
Jan 30 13:41:54.888764 kernel: SMBIOS 2.8 present.
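The BIOS-e820 map above is the firmware's account of physical memory, and it is regular enough to total mechanically. A minimal Python sketch, assuming input in the dmesg format of the lines above; the kernel's own "Memory: ...K available" figure later in the log will differ slightly, since it excludes further reservations:

```python
import re

# Matches lines like:
# "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable"
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(dmesg_text: str) -> int:
    """Sum the sizes of all e820 ranges the firmware marks 'usable'."""
    total = 0
    for start, end, kind in E820_RE.findall(dmesg_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

log = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""
print(usable_bytes(log) // 1024, "KiB usable")  # roughly 2.5 GiB for this VM
```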
Jan 30 13:41:54.888783 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 30 13:41:54.888792 kernel: Hypervisor detected: KVM
Jan 30 13:41:54.888802 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:41:54.888812 kernel: kvm-clock: using sched offset of 2199074981 cycles
Jan 30 13:41:54.888822 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:41:54.888832 kernel: tsc: Detected 2794.748 MHz processor
Jan 30 13:41:54.888842 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:41:54.888852 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:41:54.888862 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 30 13:41:54.888875 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:41:54.888885 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:41:54.888895 kernel: Using GB pages for direct mapping
Jan 30 13:41:54.888905 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:41:54.888915 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 30 13:41:54.888925 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:54.888935 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:54.888957 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:54.888970 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 30 13:41:54.888981 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:54.888990 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:54.889000 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:54.889010 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:41:54.889020 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 30 13:41:54.889030 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 30 13:41:54.889045 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 30 13:41:54.889058 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 30 13:41:54.889068 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 30 13:41:54.889079 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 30 13:41:54.889089 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 30 13:41:54.889099 kernel: No NUMA configuration found
Jan 30 13:41:54.889110 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 30 13:41:54.889120 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 30 13:41:54.889133 kernel: Zone ranges:
Jan 30 13:41:54.889144 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:41:54.889154 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 30 13:41:54.889164 kernel: Normal empty
Jan 30 13:41:54.889174 kernel: Movable zone start for each node
Jan 30 13:41:54.889185 kernel: Early memory node ranges
Jan 30 13:41:54.889195 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:41:54.889205 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 30 13:41:54.889216 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 30 13:41:54.889229 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:41:54.889239 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:41:54.889249 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 30 13:41:54.889260 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:41:54.889270 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:41:54.889281 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:41:54.889291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:41:54.889301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:41:54.889311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:41:54.889324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:41:54.889335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:41:54.889345 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:41:54.889356 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:41:54.889366 kernel: TSC deadline timer available
Jan 30 13:41:54.889376 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:41:54.889387 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:41:54.889397 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:41:54.889407 kernel: kvm-guest: setup PV sched yield
Jan 30 13:41:54.889417 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 13:41:54.889430 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:41:54.889441 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:41:54.889452 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:41:54.889462 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:41:54.889473 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:41:54.889483 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:41:54.889493 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:41:54.889504 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:41:54.889516 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:41:54.889530 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:41:54.889540 kernel: random: crng init done
Jan 30 13:41:54.889550 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:41:54.889560 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:41:54.889570 kernel: Fallback order for Node 0: 0
Jan 30 13:41:54.889581 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 30 13:41:54.889591 kernel: Policy zone: DMA32
Jan 30 13:41:54.889602 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:41:54.889615 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 30 13:41:54.889625 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:41:54.889636 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:41:54.889646 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:41:54.889656 kernel: Dynamic Preempt: voluntary
Jan 30 13:41:54.889667 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:41:54.889678 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:41:54.889689 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:41:54.889699 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:41:54.889713 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:41:54.889723 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:41:54.889733 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:41:54.889744 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:41:54.889754 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:41:54.889765 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:41:54.889784 kernel: Console: colour VGA+ 80x25
Jan 30 13:41:54.889794 kernel: printk: console [ttyS0] enabled
Jan 30 13:41:54.889804 kernel: ACPI: Core revision 20230628
Jan 30 13:41:54.889818 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:41:54.889829 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:41:54.889839 kernel: x2apic enabled
Jan 30 13:41:54.889849 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:41:54.889860 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:41:54.889870 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:41:54.889881 kernel: kvm-guest: setup PV IPIs
Jan 30 13:41:54.889903 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:41:54.889914 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:41:54.889927 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 30 13:41:54.889939 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:41:54.889963 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:41:54.889977 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:41:54.889988 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:41:54.889999 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:41:54.890010 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:41:54.890024 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:41:54.890035 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:41:54.890046 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:41:54.890057 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:41:54.890068 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:41:54.890079 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:41:54.890091 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:41:54.890102 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:41:54.890113 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:41:54.890127 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:41:54.890138 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:41:54.890148 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:41:54.890160 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:41:54.890171 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:41:54.890182 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:41:54.890193 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:41:54.890204 kernel: landlock: Up and running.
Jan 30 13:41:54.890215 kernel: SELinux: Initializing.
Jan 30 13:41:54.890228 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:41:54.890239 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:41:54.890251 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:41:54.890262 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:41:54.890273 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:41:54.890284 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:41:54.890295 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:41:54.890306 kernel: ... version: 0
Jan 30 13:41:54.890319 kernel: ... bit width: 48
Jan 30 13:41:54.890330 kernel: ... generic registers: 6
Jan 30 13:41:54.890341 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:41:54.890352 kernel: ... max period: 00007fffffffffff
Jan 30 13:41:54.890363 kernel: ... fixed-purpose events: 0
Jan 30 13:41:54.890374 kernel: ... event mask: 000000000000003f
Jan 30 13:41:54.890384 kernel: signal: max sigframe size: 1776
Jan 30 13:41:54.890395 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:41:54.890406 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:41:54.890417 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:41:54.890431 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:41:54.890441 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:41:54.890452 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:41:54.890463 kernel: smpboot: Max logical packages: 1
Jan 30 13:41:54.890474 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 30 13:41:54.890485 kernel: devtmpfs: initialized
Jan 30 13:41:54.890496 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:41:54.890507 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:41:54.890518 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:41:54.890532 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:41:54.890543 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:41:54.890554 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:41:54.890565 kernel: audit: type=2000 audit(1738244514.339:1): state=initialized audit_enabled=0 res=1
Jan 30 13:41:54.890576 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:41:54.890587 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:41:54.890597 kernel: cpuidle: using governor menu
Jan 30 13:41:54.890608 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:41:54.890619 kernel: dca service started, version 1.12.1
Jan 30 13:41:54.890633 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:41:54.890644 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:41:54.890655 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:41:54.890666 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
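The two BogoMIPS figures above agree with each other. A quick consistency check, assuming the usual relation BogoMIPS = lpj x HZ / 500000 with CONFIG_HZ=1000; the HZ value is inferred from the lpj/BogoMIPS ratio, it is not logged:

```python
lpj = 2794748   # loops per jiffy, from "Calibrating delay loop ... (lpj=2794748)"
HZ = 1000       # assumed CONFIG_HZ, inferred from the ratio above

def bogomips(loops_per_jiffy: int) -> str:
    # The kernel prints lpj / (500000 / HZ) with two truncated decimal places.
    hundredths = loops_per_jiffy * HZ // 5000
    return f"{hundredths // 100}.{hundredths % 100:02d}"

print(bogomips(lpj))      # 5589.49, the per-CPU figure above
print(bogomips(4 * lpj))  # 22357.98, matching "Total of 4 processors activated"
```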
Jan 30 13:41:54.890677 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:41:54.890688 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:41:54.890699 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:41:54.890710 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:41:54.890721 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:41:54.890734 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:41:54.890745 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:41:54.890756 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:41:54.890767 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:41:54.890786 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:41:54.890797 kernel: ACPI: Interpreter enabled
Jan 30 13:41:54.890808 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:41:54.890819 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:41:54.890830 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:41:54.890844 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:41:54.890854 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:41:54.890865 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:41:54.891117 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:41:54.891280 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:41:54.891430 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:41:54.891444 kernel: PCI host bridge to bus 0000:00
Jan 30 13:41:54.891604 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:41:54.891743 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:41:54.891893 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:41:54.892053 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 30 13:41:54.892193 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:41:54.892375 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 30 13:41:54.892560 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:41:54.892748 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:41:54.892926 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:41:54.893103 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 30 13:41:54.893254 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 30 13:41:54.893403 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 30 13:41:54.893557 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:41:54.893727 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:41:54.893891 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 30 13:41:54.894060 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 30 13:41:54.894213 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 30 13:41:54.894388 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:41:54.894608 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 13:41:54.894801 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 30 13:41:54.894936 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 30 13:41:54.895084 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:41:54.895206 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 30 13:41:54.895326 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 30 13:41:54.895445 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 30 13:41:54.895566 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 30 13:41:54.895696 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:41:54.895831 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:41:54.896041 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:41:54.896168 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 30 13:41:54.896288 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 30 13:41:54.896415 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:41:54.896537 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 13:41:54.896547 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:41:54.896559 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:41:54.896567 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:41:54.896575 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:41:54.896583 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:41:54.896590 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:41:54.896598 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:41:54.896606 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:41:54.896613 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:41:54.896621 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:41:54.896631 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:41:54.896639 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:41:54.896646 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:41:54.896654 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:41:54.896662 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:41:54.896669 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:41:54.896677 kernel: iommu: Default domain type: Translated
Jan 30 13:41:54.896685 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:41:54.896692 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:41:54.896702 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:41:54.896710 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:41:54.896717 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 30 13:41:54.896893 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:41:54.897046 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:41:54.897167 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:41:54.897177 kernel: vgaarb: loaded
Jan 30 13:41:54.897185 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:41:54.897197 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:41:54.897204 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:41:54.897212 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:41:54.897220 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:41:54.897227 kernel: pnp: PnP ACPI init
Jan 30 13:41:54.897360 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:41:54.897371 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:41:54.897379 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:41:54.897390 kernel: NET: Registered PF_INET protocol family
Jan 30 13:41:54.897398 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:41:54.897406 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:41:54.897414 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:41:54.897421 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:41:54.897429 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:41:54.897437 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:41:54.897444 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:41:54.897452 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:41:54.897462 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:41:54.897470 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:41:54.897582 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:41:54.897693 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:41:54.897813 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:41:54.897924 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 30 13:41:54.898089 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:41:54.898199 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 30 13:41:54.898213 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:41:54.898221 kernel: Initialise system trusted keyrings
Jan 30 13:41:54.898229 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:41:54.898236 kernel: Key type asymmetric registered
Jan 30 13:41:54.898244 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:41:54.898251 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:41:54.898259 kernel: io scheduler mq-deadline registered
Jan 30 13:41:54.898267 kernel: io scheduler kyber registered
Jan 30 13:41:54.898274 kernel: io scheduler bfq registered
Jan 30 13:41:54.898284 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:41:54.898292 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:41:54.898300 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:41:54.898308 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:41:54.898315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:41:54.898323 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:41:54.898331 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:41:54.898339 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:41:54.898346 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:41:54.898356 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:41:54.898478 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:41:54.898592 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:41:54.898704 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:41:54 UTC (1738244514)
Jan 30 13:41:54.898823 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:41:54.898834 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:41:54.898841 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:41:54.898849 kernel: Segment Routing with IPv6
Jan 30 13:41:54.898860 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:41:54.898868 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:41:54.898875 kernel: Key type dns_resolver registered
Jan 30 13:41:54.898883 kernel: IPI shorthand broadcast: enabled
Jan 30 13:41:54.898891 kernel: sched_clock: Marking stable (589003239, 104650319)->(706418697, -12765139)
Jan 30 13:41:54.898898 kernel: registered taskstats version 1
Jan 30 13:41:54.898906 kernel: Loading compiled-in X.509 certificates
Jan 30 13:41:54.898914 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:41:54.898921 kernel: Key type .fscrypt registered
Jan 30 13:41:54.898931 kernel: Key type fscrypt-provisioning registered
Jan 30 13:41:54.898939 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:41:54.898958 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:41:54.898966 kernel: ima: No architecture policies found
Jan 30 13:41:54.898974 kernel: clk: Disabling unused clocks
Jan 30 13:41:54.898982 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:41:54.898989 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:41:54.898997 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:41:54.899005 kernel: Run /init as init process
Jan 30 13:41:54.899015 kernel: with arguments:
Jan 30 13:41:54.899022 kernel: /init
Jan 30 13:41:54.899030 kernel: with environment:
Jan 30 13:41:54.899037 kernel: HOME=/
Jan 30 13:41:54.899045 kernel: TERM=linux
Jan 30 13:41:54.899052 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:41:54.899062 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:41:54.899072 systemd[1]: Detected virtualization kvm.
Jan 30 13:41:54.899082 systemd[1]: Detected architecture x86-64.
Jan 30 13:41:54.899090 systemd[1]: Running in initrd.
Jan 30 13:41:54.899098 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:41:54.899106 systemd[1]: Hostname set to <localhost>.
Jan 30 13:41:54.899114 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:41:54.899122 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:41:54.899131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:41:54.899139 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:41:54.899150 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:41:54.899169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:41:54.899180 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:41:54.899188 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:41:54.899198 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:41:54.899209 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:41:54.899217 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:41:54.899226 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:41:54.899234 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:41:54.899242 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:41:54.899250 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:41:54.899258 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:41:54.899267 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:41:54.899277 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:41:54.899285 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:41:54.899294 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:41:54.899302 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:41:54.899310 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:41:54.899319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:41:54.899327 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:41:54.899335 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:41:54.899346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:41:54.899356 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:41:54.899364 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:41:54.899373 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:41:54.899381 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:41:54.899390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:54.899398 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:41:54.899407 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:41:54.899415 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:41:54.899426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:41:54.899452 systemd-journald[192]: Collecting audit messages is disabled.
Jan 30 13:41:54.899472 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:41:54.899481 systemd-journald[192]: Journal started
Jan 30 13:41:54.899500 systemd-journald[192]: Runtime Journal (/run/log/journal/13b700fa66a44605956f35cab8c18140) is 6.0M, max 48.4M, 42.3M free.
Jan 30 13:41:54.894809 systemd-modules-load[193]: Inserted module 'overlay'
Jan 30 13:41:54.928748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
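The device unit names above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are systemd's path escaping at work: each '/' becomes '-' and characters outside the unit-name alphabet become \xNN. A rough Python sketch of the rule, an approximation of systemd-escape --path rather than the reference implementation:

```python
def systemd_path_escape(path: str) -> str:
    """Approximate systemd path escaping: strip surrounding '/',
    turn inner '/' into '-', and \\xNN-escape anything outside
    [A-Za-z0-9:_.] (a leading '.' is escaped too)."""
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif (ch.isalnum() or ch in ":_.") and not (i == 0 and ch == "."):
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out) + ".device"

print(systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit names above
```

This is why a plain '-' inside a label shows up as \x2d: the escaping must stay reversible, and '-' is already taken as the path separator.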
Jan 30 13:41:54.928787 kernel: Bridge firewalling registered
Jan 30 13:41:54.921326 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jan 30 13:41:54.931473 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:41:54.933640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:41:54.935008 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:54.947193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:41:54.948009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:41:54.949077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:41:54.952068 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:41:54.962435 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:41:54.965990 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:41:54.968858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:41:54.983156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:41:54.984637 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:41:54.988365 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:41:55.005472 dracut-cmdline[228]: dracut-dracut-053
Jan 30 13:41:55.008346 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:41:55.015671 systemd-resolved[226]: Positive Trust Anchors:
Jan 30 13:41:55.015689 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:41:55.015720 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:41:55.018161 systemd-resolved[226]: Defaulting to hostname 'linux'.
Jan 30 13:41:55.019270 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:41:55.025403 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:41:55.103979 kernel: SCSI subsystem initialized
Jan 30 13:41:55.112969 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:41:55.123972 kernel: iscsi: registered transport (tcp)
Jan 30 13:41:55.144974 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:41:55.145001 kernel: QLogic iSCSI HBA Driver
Jan 30 13:41:55.199648 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:41:55.212094 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:41:55.236380 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:41:55.236477 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:41:55.236492 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:41:55.282982 kernel: raid6: avx2x4 gen() 28593 MB/s
Jan 30 13:41:55.299969 kernel: raid6: avx2x2 gen() 30902 MB/s
Jan 30 13:41:55.317047 kernel: raid6: avx2x1 gen() 26002 MB/s
Jan 30 13:41:55.317067 kernel: raid6: using algorithm avx2x2 gen() 30902 MB/s
Jan 30 13:41:55.335055 kernel: raid6: .... xor() 19988 MB/s, rmw enabled
Jan 30 13:41:55.335073 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:41:55.354971 kernel: xor: automatically using best checksumming function avx
Jan 30 13:41:55.502993 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:41:55.516550 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:41:55.529137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:41:55.540856 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Jan 30 13:41:55.546634 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:41:55.553104 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:41:55.567843 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 30 13:41:55.600722 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:41:55.609136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:41:55.685116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:41:55.697220 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:41:55.709205 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:41:55.711705 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:41:55.711815 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:41:55.712517 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:41:55.729346 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 30 13:41:55.760077 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:41:55.760097 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:41:55.760267 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:41:55.760281 kernel: GPT:9289727 != 19775487
Jan 30 13:41:55.760293 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:41:55.760313 kernel: GPT:9289727 != 19775487
Jan 30 13:41:55.760325 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:41:55.760337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:41:55.760349 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:41:55.760362 kernel: libata version 3.00 loaded.
Jan 30 13:41:55.760374 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:41:55.729217 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:41:55.741166 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
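The GPT complaints above are a size mismatch, not corruption: the backup header sits where the end of the original image was, not at the end of the grown disk. In numbers taken from the log (disk-uuid.service rewrites the headers a few entries later, which is why the later rescans of vda come up clean):

```python
# From "virtio_blk virtio1: [vda] 19775488 512-byte logical blocks" and
# "GPT:9289727 != 19775487" above.
disk_sectors = 19775488
backup_header_lba = 9289727            # where the backup GPT header was found
last_lba = disk_sectors - 1            # where it belongs: 19775487
print(backup_header_lba == last_lba)   # False

# Size the original image implied by the stale backup header:
print((backup_header_lba + 1) * 512 / 2**30)  # ~4.43, i.e. a ~4.4 GiB image
                                              # later grown to a 10.1 GB disk
```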
Jan 30 13:41:55.767452 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 13:41:55.803416 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 13:41:55.803432 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 13:41:55.803582 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 13:41:55.803718 kernel: scsi host0: ahci
Jan 30 13:41:55.803878 kernel: scsi host1: ahci
Jan 30 13:41:55.804050 kernel: scsi host2: ahci
Jan 30 13:41:55.804194 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455)
Jan 30 13:41:55.804206 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (457)
Jan 30 13:41:55.804216 kernel: scsi host3: ahci
Jan 30 13:41:55.804355 kernel: scsi host4: ahci
Jan 30 13:41:55.804496 kernel: scsi host5: ahci
Jan 30 13:41:55.804639 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 30 13:41:55.804657 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 30 13:41:55.804668 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 30 13:41:55.804680 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 30 13:41:55.804690 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 30 13:41:55.804700 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 30 13:41:55.768800 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:41:55.768916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:41:55.772878 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:41:55.774099 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:41:55.774276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:55.777145 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:55.786199 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:41:55.817581 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:41:55.854132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:41:55.860259 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:41:55.869538 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:41:55.871096 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:41:55.881235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:41:55.892091 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:41:55.893980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:41:55.905458 disk-uuid[564]: Primary Header is updated.
Jan 30 13:41:55.905458 disk-uuid[564]: Secondary Entries is updated.
Jan 30 13:41:55.905458 disk-uuid[564]: Secondary Header is updated.
Jan 30 13:41:55.909981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:41:55.914978 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:41:55.916060 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:41:56.115975 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:56.123994 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:56.124092 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:56.124106 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 13:41:56.124984 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:56.125986 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:41:56.126988 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 13:41:56.127013 kernel: ata3.00: applying bridge limits
Jan 30 13:41:56.128138 kernel: ata3.00: configured for UDMA/100
Jan 30 13:41:56.128986 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:41:56.172473 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 13:41:56.185510 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:41:56.185528 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:41:56.914972 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:41:56.915421 disk-uuid[569]: The operation has completed successfully.
Jan 30 13:41:56.946752 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:41:56.946879 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:41:56.972104 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:41:56.977249 sh[589]: Success
Jan 30 13:41:56.989972 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 13:41:57.023303 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:41:57.037547 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:41:57.040367 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:41:57.053269 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:41:57.053306 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:41:57.053319 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:41:57.055114 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:41:57.055129 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:41:57.059875 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:41:57.060620 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:41:57.076194 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:41:57.078287 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:41:57.087007 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:57.087048 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:41:57.087061 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:41:57.090982 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:41:57.101660 systemd[1]: mnt-oem.mount: Deactivated successfully.
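verity-setup.service assembles the read-only /dev/mapper/usr device from two kernel command line parameters logged at the very top: verity.usr names the partition carrying /usr plus its dm-verity hash tree, and verity.usrhash is the expected Merkle-tree root hash. A small sketch of pulling them out of a cmdline string (the values are copied from this log; the parsing is illustrative, not Flatcar's actual initrd code):

```python
# Values copied from the "Command line:" entry at the top of this log.
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200 flatcar.first_boot=detected "
    "verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681"
)
params = dict(tok.split("=", 1) for tok in cmdline.split() if "=" in tok)
print(params["verity.usr"])      # partition holding /usr and its hash tree
print(params["verity.usrhash"])  # expected Merkle-tree root hash
```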
Jan 30 13:41:57.103841 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:57.142618 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:41:57.151117 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:41:57.204477 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:41:57.209589 ignition[721]: Ignition 2.19.0
Jan 30 13:41:57.209745 ignition[721]: Stage: fetch-offline
Jan 30 13:41:57.213110 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:41:57.209791 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:57.209800 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:57.209892 ignition[721]: parsed url from cmdline: ""
Jan 30 13:41:57.209895 ignition[721]: no config URL provided
Jan 30 13:41:57.209900 ignition[721]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:41:57.209909 ignition[721]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:41:57.209935 ignition[721]: op(1): [started] loading QEMU firmware config module
Jan 30 13:41:57.209941 ignition[721]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:41:57.216667 ignition[721]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:41:57.244511 systemd-networkd[778]: lo: Link UP
Jan 30 13:41:57.244526 systemd-networkd[778]: lo: Gained carrier
Jan 30 13:41:57.246221 systemd-networkd[778]: Enumeration completed
Jan 30 13:41:57.246558 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:41:57.246613 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:41:57.246618 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:41:57.247676 systemd-networkd[778]: eth0: Link UP
Jan 30 13:41:57.247680 systemd-networkd[778]: eth0: Gained carrier
Jan 30 13:41:57.247687 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:41:57.250050 systemd[1]: Reached target network.target - Network.
Jan 30 13:41:57.272300 ignition[721]: parsing config with SHA512: b530db51627b0c4bea4f1a7065933665fc1bfedf4f3484a86d98ee324ff13661ab5e184c0efc130f02bbcd7a549ab12660e3f7e524692f1352c230f0ef496dd5
Jan 30 13:41:57.276941 unknown[721]: fetched base config from "system"
Jan 30 13:41:57.276974 unknown[721]: fetched user config from "qemu"
Jan 30 13:41:57.277468 ignition[721]: fetch-offline: fetch-offline passed
Jan 30 13:41:57.277549 ignition[721]: Ignition finished successfully
Jan 30 13:41:57.279044 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:41:57.280096 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:41:57.281977 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:41:57.288229 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
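Ignition's "parsing config with SHA512: ..." line above is a digest of the config it parsed (here fetched via qemu_fw_cfg), so a config you build yourself can be checked offline. A sketch with a hypothetical user.ign: the JSON content is illustrative only, not the config from this boot, and the assumption is that the logged digest covers the raw config bytes:

```python
import hashlib
import json

# Hypothetical minimal config of the kind the files stage below acts on;
# the field names follow Ignition's v3 config format, but the spec version
# and key material here are made up for illustration.
user_ign = json.dumps({
    "ignition": {"version": "3.3.0"},
    "passwd": {"users": [
        {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@host"]}
    ]},
})

# Offline counterpart of the "parsing config with SHA512" log line:
print(hashlib.sha512(user_ign.encode()).hexdigest())
```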
Jan 30 13:41:57.300682 ignition[783]: Ignition 2.19.0
Jan 30 13:41:57.300694 ignition[783]: Stage: kargs
Jan 30 13:41:57.300864 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:57.300875 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:57.304792 ignition[783]: kargs: kargs passed
Jan 30 13:41:57.305449 ignition[783]: Ignition finished successfully
Jan 30 13:41:57.309251 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:41:57.321067 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:41:57.336389 ignition[791]: Ignition 2.19.0
Jan 30 13:41:57.336401 ignition[791]: Stage: disks
Jan 30 13:41:57.336569 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:57.336581 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:57.337404 ignition[791]: disks: disks passed
Jan 30 13:41:57.337446 ignition[791]: Ignition finished successfully
Jan 30 13:41:57.343974 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:41:57.346251 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:41:57.347440 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:41:57.347508 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:41:57.350969 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:41:57.353657 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:41:57.367084 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:41:57.382326 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:41:57.389503 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:41:57.403054 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:41:57.494969 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:41:57.495090 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:41:57.496662 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:41:57.507042 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:41:57.509235 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:41:57.510942 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:41:57.516248 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809)
Jan 30 13:41:57.516277 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:57.511000 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:41:57.523089 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:41:57.523106 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:41:57.523117 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:41:57.511024 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:41:57.519375 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:41:57.524043 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
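The fsck summary "ROOT: clean, 14/553520 files, 52654/553472 blocks" reads as used/total counts for inodes and blocks. A quick interpretation, with the caveat that the 4 KiB block size is an assumed ext4 default and is not stated in the log:

```python
inodes_used, inodes_total = 14, 553520
blocks_used, blocks_total = 52654, 553472
block_size = 4096  # assumed ext4 default; not logged

print(f"{100 * inodes_used / inodes_total:.4f}% of inodes in use")   # ~0.0025%
print(f"{100 * blocks_used / blocks_total:.1f}% of blocks in use")   # ~9.5%
print(f"{blocks_total * block_size / 2**30:.2f} GiB filesystem")     # ~2.11 GiB
```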
Jan 30 13:41:57.526532 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:41:57.563137 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:41:57.568340 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:41:57.573118 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:41:57.577940 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:41:57.666770 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:41:57.678089 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:41:57.679869 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:41:57.685972 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:57.706093 ignition[922]: INFO : Ignition 2.19.0
Jan 30 13:41:57.706093 ignition[922]: INFO : Stage: mount
Jan 30 13:41:57.708773 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:41:57.708773 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:41:57.708773 ignition[922]: INFO : mount: mount passed
Jan 30 13:41:57.708773 ignition[922]: INFO : Ignition finished successfully
Jan 30 13:41:57.706364 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:41:57.711090 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:41:57.723118 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:41:58.052651 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:41:58.065114 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:41:58.070980 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (935)
Jan 30 13:41:58.073017 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:41:58.073044 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:41:58.073059 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:41:58.075976 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:41:58.077741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:41:58.098847 ignition[953]: INFO : Ignition 2.19.0 Jan 30 13:41:58.098847 ignition[953]: INFO : Stage: files Jan 30 13:41:58.100583 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:41:58.100583 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:41:58.103238 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:41:58.104526 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:41:58.104526 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:41:58.108408 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:41:58.109897 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:41:58.111302 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:41:58.110378 unknown[953]: wrote ssh authorized keys file for user: core Jan 30 13:41:58.113906 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:41:58.113906 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:41:58.113906 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:41:58.113906 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:41:58.146964 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:41:58.226329 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:41:58.226329 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:41:58.231151 ignition[953]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:41:58.231151 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:41:58.533155 systemd-networkd[778]: eth0: Gained IPv6LL Jan 30 13:41:58.612186 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:41:59.116015 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:41:59.116015 ignition[953]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 30 13:41:59.120096 ignition[953]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:41:59.149039 ignition[953]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:41:59.154268 ignition[953]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:41:59.156025 ignition[953]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:41:59.156025 ignition[953]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jan 
30 13:41:59.156025 ignition[953]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:41:59.156025 ignition[953]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:41:59.156025 ignition[953]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:41:59.156025 ignition[953]: INFO : files: files passed Jan 30 13:41:59.156025 ignition[953]: INFO : Ignition finished successfully Jan 30 13:41:59.157391 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:41:59.166188 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:41:59.168691 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:41:59.170300 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:41:59.170445 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:41:59.178455 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:41:59.181312 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:41:59.183162 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:41:59.186191 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:41:59.184353 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:41:59.186402 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:41:59.199221 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:41:59.223941 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:41:59.224131 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:41:59.226677 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:41:59.229027 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:41:59.231434 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:41:59.246249 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:41:59.260634 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:41:59.273257 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:41:59.284458 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:41:59.287106 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:41:59.288521 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:41:59.290686 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:41:59.290825 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:41:59.293446 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:41:59.295213 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:41:59.297470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
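Note: "setting preset to enabled/disabled", as the files stage does above for prepare-helm.service and coreos-metadata.service, reduces to creating or removing enablement symlinks under the target root. A sketch, assuming the conventional multi-user.target.wants location (illustrative, not a dump of what Ignition actually ran):

    import os

    # Enabling a unit by preset is effectively a symlink in the target's
    # .wants directory inside the new root. All paths are illustrative.
    unit = "prepare-helm.service"
    root = "/sysroot"
    src = f"/etc/systemd/system/{unit}"
    wants = f"{root}/etc/systemd/system/multi-user.target.wants"
    os.makedirs(wants, exist_ok=True)
    link = os.path.join(wants, unit)
    if not os.path.islink(link):
        os.symlink(src, link)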
Jan 30 13:41:59.299742 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:41:59.302048 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:41:59.304449 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:41:59.306798 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:41:59.309309 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:41:59.311514 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:41:59.313968 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:41:59.315915 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:41:59.316109 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:41:59.318665 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:41:59.320257 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:41:59.322574 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:41:59.322693 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:41:59.325050 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:41:59.325174 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:41:59.327628 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:41:59.327747 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:41:59.329990 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:41:59.331908 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:41:59.336037 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:41:59.338252 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:41:59.340379 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:41:59.342387 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:41:59.342492 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:41:59.344615 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:41:59.344713 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:41:59.347345 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:41:59.347467 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:41:59.349654 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:41:59.349758 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:41:59.365083 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:41:59.366120 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:41:59.366236 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:41:59.369480 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:41:59.370627 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:41:59.370763 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:41:59.373345 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:41:59.373448 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 13:41:59.380365 ignition[1008]: INFO : Ignition 2.19.0 Jan 30 13:41:59.380365 ignition[1008]: INFO : Stage: umount Jan 30 13:41:59.380365 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:41:59.380365 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:41:59.380365 ignition[1008]: INFO : umount: umount passed Jan 30 13:41:59.380365 ignition[1008]: INFO : Ignition finished successfully Jan 30 13:41:59.379644 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:41:59.379777 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:41:59.382063 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:41:59.382170 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:41:59.385806 systemd[1]: Stopped target network.target - Network. Jan 30 13:41:59.388171 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:41:59.388235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:41:59.390181 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:41:59.390227 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:41:59.392250 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:41:59.392298 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:41:59.394408 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:41:59.394456 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:41:59.397396 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:41:59.399563 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:41:59.402601 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:41:59.404987 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 30 13:41:59.408082 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:41:59.408236 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:41:59.410920 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:41:59.410984 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:41:59.421041 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:41:59.422823 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:41:59.422880 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:41:59.425598 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:41:59.428761 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:41:59.428885 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:41:59.441092 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:41:59.441174 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:41:59.442271 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:41:59.442318 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:41:59.442619 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:41:59.442670 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 13:41:59.453004 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:41:59.453140 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:41:59.458832 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:41:59.459033 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:41:59.460202 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:41:59.460252 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:41:59.462595 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:41:59.462634 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:41:59.462942 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:41:59.463066 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:41:59.464045 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:41:59.464090 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:41:59.471606 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:41:59.471671 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:41:59.491243 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:41:59.491332 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:41:59.491414 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:41:59.497980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:41:59.498031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:41:59.498746 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:41:59.498859 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:41:59.745363 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:41:59.745522 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:41:59.748366 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:41:59.749738 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:41:59.749826 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:41:59.757119 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:41:59.763910 systemd[1]: Switching root. Jan 30 13:41:59.801258 systemd-journald[192]: Journal stopped Jan 30 13:42:01.317043 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
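Note: between "Journal stopped" and the SIGTERM acknowledgment, PID 1 pivots from the initramfs into the real root and journald is restarted there; the two timestamps bound how long the hand-off took:

    from datetime import datetime

    # Gap between the last initrd journal entry and the first real-root one.
    fmt = "%H:%M:%S.%f"
    stop = datetime.strptime("13:41:59.801258", fmt)
    start = datetime.strptime("13:42:01.317043", fmt)
    print(f"{(start - stop).total_seconds():.3f} s")  # ~1.516 s for the pivot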
Jan 30 13:42:01.317117 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:42:01.317134 kernel: SELinux: policy capability open_perms=1 Jan 30 13:42:01.317149 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:42:01.317160 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:42:01.317171 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:42:01.317183 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:42:01.317194 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:42:01.317205 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:42:01.317216 kernel: audit: type=1403 audit(1738244520.522:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:42:01.317229 systemd[1]: Successfully loaded SELinux policy in 44.674ms. Jan 30 13:42:01.317256 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.245ms. Jan 30 13:42:01.317271 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:42:01.317283 systemd[1]: Detected virtualization kvm. Jan 30 13:42:01.317295 systemd[1]: Detected architecture x86-64. Jan 30 13:42:01.317307 systemd[1]: Detected first boot. Jan 30 13:42:01.317319 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:42:01.317331 zram_generator::config[1068]: No configuration found. Jan 30 13:42:01.317344 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:42:01.317356 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:42:01.317370 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:42:01.317384 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:42:01.317396 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:42:01.317408 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:42:01.317419 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:42:01.317432 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:42:01.317444 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:42:01.317456 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:42:01.317468 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:42:01.317483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:42:01.317495 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:42:01.317507 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:42:01.317520 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:42:01.317532 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:42:01.317544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:42:01.317556 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
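Note: "Detected first boot" and "Initializing machine ID from VM UUID" mean systemd seeds /etc/machine-id from the guest UUID that KVM exposes through DMI. The same value can be read from sysfs (root privileges usually required; the path below is the standard DMI location):

    # Read the VM UUID that machine-id initialization is derived from.
    try:
        with open("/sys/class/dmi/id/product_uuid") as f:
            print(f.read().strip().replace("-", "").lower())
    except (FileNotFoundError, PermissionError) as exc:
        print("DMI UUID unavailable:", exc)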
Jan 30 13:42:01.317567 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:42:01.317579 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:42:01.317601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:42:01.317614 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:42:01.317626 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:42:01.317644 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:42:01.317656 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:42:01.317669 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:42:01.317681 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:42:01.317693 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:42:01.317707 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:42:01.317719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:42:01.317731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:42:01.317743 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:42:01.317755 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:42:01.317767 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:42:01.317779 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:42:01.317797 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:42:01.317815 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:42:01.317844 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:42:01.317869 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:42:01.317892 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:42:01.317920 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:42:01.319567 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:42:01.319589 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:42:01.319613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:42:01.319630 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:42:01.319646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:42:01.319658 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:42:01.319669 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:42:01.319682 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:42:01.319694 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 13:42:01.319708 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
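Note: the journald warning about BPF/cgroup firewalling is consistent with this image running the legacy cgroup v1 hierarchy (the files stage earlier wrote /etc/flatcar-cgroupv1, and systemd later reports "System is tainted: cgroupsv1"). A quick check of which hierarchy a host runs (sketch; a unified v2 hierarchy exposes cgroup.controllers at the mount root):

    import os

    # cgroup v2 (unified) publishes cgroup.controllers at the root mount;
    # its absence indicates the legacy v1 hierarchy seen on this host.
    v2 = os.path.exists("/sys/fs/cgroup/cgroup.controllers")
    print("cgroup", "v2 (unified)" if v2 else "v1 (legacy)")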
Jan 30 13:42:01.319726 kernel: fuse: init (API version 7.39) Jan 30 13:42:01.319738 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:42:01.319753 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:42:01.319765 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:42:01.319777 kernel: loop: module loaded Jan 30 13:42:01.319788 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:42:01.319801 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:42:01.319813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:42:01.319865 systemd-journald[1158]: Collecting audit messages is disabled. Jan 30 13:42:01.319891 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:42:01.319905 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:42:01.319917 kernel: ACPI: bus type drm_connector registered Jan 30 13:42:01.319928 systemd-journald[1158]: Journal started Jan 30 13:42:01.319961 systemd-journald[1158]: Runtime Journal (/run/log/journal/13b700fa66a44605956f35cab8c18140) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:42:01.321929 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:42:01.325072 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:42:01.326357 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:42:01.329050 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:42:01.330446 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:42:01.331900 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:42:01.333990 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:42:01.335569 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:42:01.335803 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:42:01.337348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:42:01.337558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:42:01.339057 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:42:01.339266 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:42:01.340688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:42:01.340892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:42:01.342483 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:42:01.342698 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:42:01.344148 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:42:01.344413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:42:01.346278 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:42:01.348612 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:42:01.350448 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 30 13:42:01.364718 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:42:01.371054 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:42:01.373899 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:42:01.375435 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:42:01.379146 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:42:01.384090 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:42:01.385578 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:42:01.389095 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:42:01.390435 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:42:01.392153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:42:01.396093 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:42:01.404633 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:42:01.405110 systemd-journald[1158]: Time spent on flushing to /var/log/journal/13b700fa66a44605956f35cab8c18140 is 29.479ms for 940 entries. Jan 30 13:42:01.405110 systemd-journald[1158]: System Journal (/var/log/journal/13b700fa66a44605956f35cab8c18140) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:42:01.443724 systemd-journald[1158]: Received client request to flush runtime journal. Jan 30 13:42:01.407285 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:42:01.419649 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:42:01.428092 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:42:01.431321 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:42:01.432903 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:42:01.441899 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:42:01.444578 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:42:01.446529 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:42:01.451243 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 30 13:42:01.451266 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 30 13:42:01.458283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:42:01.467180 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:42:01.491114 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:42:01.501078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:42:01.517976 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Jan 30 13:42:01.517996 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. 
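Note: the flush statistics above work out to roughly 31 microseconds per entry when moving the runtime journal into persistent storage:

    # 940 entries flushed to /var/log/journal in 29.479 ms:
    print(29.479 / 940 * 1000)  # ~31.4 microseconds per journal entry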
Jan 30 13:42:01.523580 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:42:01.942324 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:42:01.955298 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:42:01.980737 systemd-udevd[1233]: Using default interface naming scheme 'v255'. Jan 30 13:42:02.001207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:42:02.013241 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:42:02.027734 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:42:02.046001 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1246) Jan 30 13:42:02.049673 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 13:42:02.077636 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:42:02.084967 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:42:02.092309 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:42:02.113966 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:42:02.193490 systemd-networkd[1240]: lo: Link UP Jan 30 13:42:02.193724 systemd-networkd[1240]: lo: Gained carrier Jan 30 13:42:02.195289 systemd-networkd[1240]: Enumeration completed Jan 30 13:42:02.195666 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:42:02.195670 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:42:02.196410 systemd-networkd[1240]: eth0: Link UP Jan 30 13:42:02.196414 systemd-networkd[1240]: eth0: Gained carrier Jan 30 13:42:02.196424 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:42:02.214492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:42:02.215031 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:42:02.220104 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:42:02.220295 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:42:02.217603 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:42:02.222086 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:42:02.225967 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:42:02.230017 systemd-networkd[1240]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:42:02.233827 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:42:02.239294 kernel: kvm_amd: TSC scaling supported Jan 30 13:42:02.239345 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:42:02.239359 kernel: kvm_amd: Nested Paging enabled Jan 30 13:42:02.239371 kernel: kvm_amd: LBR virtualization supported Jan 30 13:42:02.240616 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:42:02.240649 kernel: kvm_amd: Virtual GIF supported Jan 30 13:42:02.258970 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:42:02.292272 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
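Note: systemd-networkd matched eth0 against the shipped zz-default.network fallback and took a DHCPv4 lease of 10.0.0.26/16 from 10.0.0.1. The lease's addressing can be unpacked with the standard library:

    import ipaddress

    # The DHCPv4 lease from the log: 10.0.0.26/16 via gateway 10.0.0.1.
    iface = ipaddress.ip_interface("10.0.0.26/16")
    print(iface.network)                                      # 10.0.0.0/16
    print(iface.network.num_addresses)                        # 65536
    print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True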
Jan 30 13:42:02.307252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:42:02.319181 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:42:02.329866 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:42:02.365149 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:42:02.366812 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:42:02.377090 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:42:02.382233 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:42:02.426624 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:42:02.428489 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:42:02.430038 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:42:02.430066 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:42:02.431171 systemd[1]: Reached target machines.target - Containers. Jan 30 13:42:02.433302 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:42:02.446134 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:42:02.448973 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:42:02.450310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:42:02.451275 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:42:02.455384 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:42:02.458697 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:42:02.461920 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:42:02.472307 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:42:02.473072 kernel: loop0: detected capacity change from 0 to 210664 Jan 30 13:42:02.485270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:42:02.486116 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:42:02.493971 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:42:02.517981 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 13:42:02.558336 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 13:42:02.591978 kernel: loop3: detected capacity change from 0 to 210664 Jan 30 13:42:02.600975 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 13:42:02.610973 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 13:42:02.620274 (sd-merge)[1303]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:42:02.621005 (sd-merge)[1303]: Merged extensions into '/usr'. Jan 30 13:42:02.625023 systemd[1]: Reloading requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:42:02.625037 systemd[1]: Reloading... 
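Note: the loop device capacities appear in matching pairs (210664, 140768, 142488 twice each, in the units the kernel prints), consistent with each sysext image being set up once for scanning and once for the merge; sd-merge then reports the three extensions containerd-flatcar, docker-flatcar and kubernetes merged into /usr. Grouping the capacities makes the pairing explicit:

    from collections import defaultdict

    # Capacities exactly as printed in the "detected capacity change" lines.
    caps = {"loop0": 210664, "loop1": 140768, "loop2": 142488,
            "loop3": 210664, "loop4": 140768, "loop5": 142488}
    by_size = defaultdict(list)
    for dev, size in caps.items():
        by_size[size].append(dev)
    print(dict(by_size))  # three images, each attached twice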
Jan 30 13:42:02.682976 zram_generator::config[1331]: No configuration found. Jan 30 13:42:02.705540 ldconfig[1287]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:42:02.802348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:42:02.866628 systemd[1]: Reloading finished in 241 ms. Jan 30 13:42:02.887780 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:42:02.889738 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:42:02.906126 systemd[1]: Starting ensure-sysext.service... Jan 30 13:42:02.908305 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:42:02.912375 systemd[1]: Reloading requested from client PID 1375 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:42:02.912390 systemd[1]: Reloading... Jan 30 13:42:02.932810 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:42:02.933338 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:42:02.934528 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:42:02.934835 systemd-tmpfiles[1376]: ACLs are not supported, ignoring. Jan 30 13:42:02.934916 systemd-tmpfiles[1376]: ACLs are not supported, ignoring. Jan 30 13:42:02.938332 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:42:02.938345 systemd-tmpfiles[1376]: Skipping /boot Jan 30 13:42:02.953513 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:42:02.953664 systemd-tmpfiles[1376]: Skipping /boot Jan 30 13:42:02.963974 zram_generator::config[1404]: No configuration found. Jan 30 13:42:03.078378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:42:03.142616 systemd[1]: Reloading finished in 229 ms. Jan 30 13:42:03.160530 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:42:03.175265 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:42:03.177873 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:42:03.182658 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:42:03.186487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:42:03.189738 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:42:03.198430 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:42:03.198623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:42:03.201266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:42:03.206254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 30 13:42:03.209334 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:42:03.211529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:42:03.211721 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:42:03.213593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:42:03.214220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:42:03.216564 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:42:03.216890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:42:03.221971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:42:03.222763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:42:03.231651 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:42:03.233481 augenrules[1480]: No rules Jan 30 13:42:03.235232 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:42:03.239828 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:42:03.247581 systemd[1]: Finished ensure-sysext.service. Jan 30 13:42:03.249967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:42:03.250378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:42:03.257181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:42:03.259812 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:42:03.262846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:42:03.267622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:42:03.268847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:42:03.272106 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:42:03.275135 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:42:03.278083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:42:03.278886 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:42:03.283118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:42:03.283376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:42:03.285193 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:42:03.285408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:42:03.286869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:42:03.287088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:42:03.288725 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:42:03.288962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 30 13:42:03.294695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:42:03.294791 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:42:03.294826 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:42:03.296207 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:42:03.308021 systemd-resolved[1454]: Positive Trust Anchors: Jan 30 13:42:03.308043 systemd-resolved[1454]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:42:03.308088 systemd-resolved[1454]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:42:03.312185 systemd-resolved[1454]: Defaulting to hostname 'linux'. Jan 30 13:42:03.314209 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:42:03.315506 systemd[1]: Reached target network.target - Network. Jan 30 13:42:03.316423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:42:03.358483 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:42:03.785774 systemd-resolved[1454]: Clock change detected. Flushing caches. Jan 30 13:42:03.785809 systemd-timesyncd[1500]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:42:03.785850 systemd-timesyncd[1500]: Initial clock synchronization to Thu 2025-01-30 13:42:03.785716 UTC. Jan 30 13:42:03.786789 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:42:03.787975 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:42:03.789254 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:42:03.790521 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:42:03.791802 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:42:03.791829 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:42:03.792733 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:42:03.793959 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:42:03.795213 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:42:03.796582 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:42:03.798289 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:42:03.801447 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:42:03.804008 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
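Note: systemd-timesyncd reached the NTP server at 10.0.0.1:123 and stepped the clock, which is why systemd-resolved logs "Clock change detected" and the journal timestamps jump from 13:42:03.35 to 13:42:03.78. A minimal SNTP probe of the same kind (sketch; the server address is taken from the log and assumed reachable):

    import socket
    import struct
    import time

    NTP_SERVER = "10.0.0.1"   # server from the log; any NTP host works
    NTP_TO_UNIX = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    # 48-byte SNTP request: LI=0, VN=3, Mode=3 (client) in the first byte.
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2)
        s.sendto(packet, (NTP_SERVER, 123))
        data, _ = s.recvfrom(48)

    # The transmit timestamp's seconds field sits at word 10 of the reply.
    secs = struct.unpack("!12I", data)[10] - NTP_TO_UNIX
    print(time.ctime(secs))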
Jan 30 13:42:03.811908 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:42:03.813182 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:42:03.814258 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:42:03.815482 systemd[1]: System is tainted: cgroupsv1 Jan 30 13:42:03.815542 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:42:03.815568 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:42:03.817239 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:42:03.819910 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:42:03.822564 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:42:03.825020 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:42:03.827134 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:42:03.831791 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:42:03.834787 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:42:03.836000 jq[1517]: false Jan 30 13:42:03.838320 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:42:03.846928 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:42:03.851512 extend-filesystems[1519]: Found loop3 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found loop4 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found loop5 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found sr0 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found vda Jan 30 13:42:03.851512 extend-filesystems[1519]: Found vda1 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found vda2 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found vda3 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found usr Jan 30 13:42:03.851512 extend-filesystems[1519]: Found vda4 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found vda6 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found vda7 Jan 30 13:42:03.851512 extend-filesystems[1519]: Found vda9 Jan 30 13:42:03.851512 extend-filesystems[1519]: Checking size of /dev/vda9 Jan 30 13:42:03.852952 dbus-daemon[1516]: [system] SELinux support is enabled Jan 30 13:42:03.853633 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:42:03.875929 extend-filesystems[1519]: Resized partition /dev/vda9 Jan 30 13:42:03.855704 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:42:03.880936 extend-filesystems[1544]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:42:03.883504 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:42:03.862649 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:42:03.871963 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:42:03.883832 jq[1541]: true Jan 30 13:42:03.874272 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:42:03.883717 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
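Note: extend-filesystems first inventories every block device it can see (the "Found loop3" through "Found vda9" run above) before deciding that /dev/vda9 needs growing. The same inventory is available from /proc/partitions:

    # List block device names the way extend-filesystems discovers them.
    with open("/proc/partitions") as f:
        names = [line.split()[-1]
                 for line in f
                 if line.strip() and not line.startswith("major")]
    print(names)  # e.g. ['vda', 'vda1', ..., 'loop0', ...]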
Jan 30 13:42:03.884038 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:42:03.884549 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:42:03.884854 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:42:03.888213 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:42:03.890745 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:42:03.908027 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1238) Jan 30 13:42:03.924846 jq[1548]: true Jan 30 13:42:03.913342 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:42:03.964641 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:42:03.965286 update_engine[1534]: I20250130 13:42:03.939867 1534 main.cc:92] Flatcar Update Engine starting Jan 30 13:42:03.965286 update_engine[1534]: I20250130 13:42:03.952569 1534 update_check_scheduler.cc:74] Next update check in 5m22s Jan 30 13:42:03.937112 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:42:03.965704 tar[1547]: linux-amd64/helm Jan 30 13:42:03.937141 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:42:03.938647 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:42:03.938668 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:42:03.948510 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:42:03.953450 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:42:03.957742 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:42:03.965238 systemd-logind[1532]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:42:03.965260 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:42:03.970192 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:42:03.970192 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:42:03.970192 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:42:03.965660 systemd-logind[1532]: New seat seat0. Jan 30 13:42:03.978214 extend-filesystems[1519]: Resized filesystem in /dev/vda9 Jan 30 13:42:03.966929 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:42:03.975952 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:42:03.976293 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:42:04.001310 bash[1578]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:42:04.004363 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:42:04.007152 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
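Note: the resize2fs step grew the root filesystem from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 7.1 GiB:

    # 4 KiB blocks, counts from the EXT4/resize2fs lines above.
    old_bytes = 553472 * 4096
    new_bytes = 1864699 * 4096
    print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")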
Jan 30 13:42:04.013232 locksmithd[1570]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:42:04.108670 sshd_keygen[1540]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:42:04.125684 containerd[1552]: time="2025-01-30T13:42:04.125585378Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:42:04.136465 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:42:04.153545 containerd[1552]: time="2025-01-30T13:42:04.153469540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:42:04.155168 containerd[1552]: time="2025-01-30T13:42:04.155124133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:42:04.155168 containerd[1552]: time="2025-01-30T13:42:04.155153228Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:42:04.155168 containerd[1552]: time="2025-01-30T13:42:04.155169649Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:42:04.155449 containerd[1552]: time="2025-01-30T13:42:04.155351820Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:42:04.155449 containerd[1552]: time="2025-01-30T13:42:04.155370966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:42:04.155449 containerd[1552]: time="2025-01-30T13:42:04.155435387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:42:04.155449 containerd[1552]: time="2025-01-30T13:42:04.155446989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:42:04.155756 containerd[1552]: time="2025-01-30T13:42:04.155731202Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:42:04.155756 containerd[1552]: time="2025-01-30T13:42:04.155752963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:42:04.155802 containerd[1552]: time="2025-01-30T13:42:04.155766147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:42:04.155802 containerd[1552]: time="2025-01-30T13:42:04.155776216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:42:04.155892 containerd[1552]: time="2025-01-30T13:42:04.155870864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:42:04.156501 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 30 13:42:04.157839 containerd[1552]: time="2025-01-30T13:42:04.156118939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:42:04.157839 containerd[1552]: time="2025-01-30T13:42:04.156282025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:42:04.157839 containerd[1552]: time="2025-01-30T13:42:04.156294679Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:42:04.157839 containerd[1552]: time="2025-01-30T13:42:04.156386220Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:42:04.157839 containerd[1552]: time="2025-01-30T13:42:04.156437557Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:42:04.162166 containerd[1552]: time="2025-01-30T13:42:04.162120344Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:42:04.162166 containerd[1552]: time="2025-01-30T13:42:04.162171820Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:42:04.162324 containerd[1552]: time="2025-01-30T13:42:04.162186859Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:42:04.162324 containerd[1552]: time="2025-01-30T13:42:04.162207517Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:42:04.162324 containerd[1552]: time="2025-01-30T13:42:04.162221994Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:42:04.162402 containerd[1552]: time="2025-01-30T13:42:04.162345897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:42:04.162766 containerd[1552]: time="2025-01-30T13:42:04.162733894Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:42:04.162916 containerd[1552]: time="2025-01-30T13:42:04.162871152Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:42:04.162916 containerd[1552]: time="2025-01-30T13:42:04.162895047Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:42:04.162916 containerd[1552]: time="2025-01-30T13:42:04.162912409Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:42:04.163008 containerd[1552]: time="2025-01-30T13:42:04.162926095Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:42:04.163008 containerd[1552]: time="2025-01-30T13:42:04.162937897Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:42:04.163008 containerd[1552]: time="2025-01-30T13:42:04.162949348Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 30 13:42:04.163008 containerd[1552]: time="2025-01-30T13:42:04.162962393Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:42:04.163008 containerd[1552]: time="2025-01-30T13:42:04.162975437Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:42:04.163008 containerd[1552]: time="2025-01-30T13:42:04.162998521Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163011084Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163023017Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163043054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163056219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163082578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163097967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163110240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163123575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163134977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163146919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163161 containerd[1552]: time="2025-01-30T13:42:04.163159012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163179019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163190651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163201521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163214366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163229694Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163248730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163259801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163275510Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163325554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163342065Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163352805Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163364848Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:42:04.163428 containerd[1552]: time="2025-01-30T13:42:04.163375798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:42:04.163756 containerd[1552]: time="2025-01-30T13:42:04.163393752Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:42:04.163756 containerd[1552]: time="2025-01-30T13:42:04.163404041Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:42:04.163756 containerd[1552]: time="2025-01-30T13:42:04.163413559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:42:04.163830 containerd[1552]: time="2025-01-30T13:42:04.163768194Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:42:04.163830 containerd[1552]: time="2025-01-30T13:42:04.163821895Z" level=info msg="Connect containerd service" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.163849406Z" level=info msg="using legacy CRI server" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.163855468Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.163935708Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.164597409Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.164704159Z" level=info msg="Start subscribing containerd event" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.164737712Z" level=info msg="Start recovering state" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.164790682Z" level=info msg="Start event monitor" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.164810479Z" level=info msg="Start snapshots syncer" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.164817853Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.164825868Z" level=info msg="Start streaming server" Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.165263308Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.165309885Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:42:04.167266 containerd[1552]: time="2025-01-30T13:42:04.165352335Z" level=info msg="containerd successfully booted in 0.041027s" Jan 30 13:42:04.165452 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:42:04.165919 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:42:04.169589 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:42:04.178116 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:42:04.192295 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:42:04.200877 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:42:04.203427 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:42:04.204845 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:42:04.339278 tar[1547]: linux-amd64/LICENSE Jan 30 13:42:04.339392 tar[1547]: linux-amd64/README.md Jan 30 13:42:04.352664 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:42:04.527652 systemd-networkd[1240]: eth0: Gained IPv6LL Jan 30 13:42:04.530773 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:42:04.532639 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:42:04.544825 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:42:04.548067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:04.551052 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:42:04.570444 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:42:04.570947 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:42:04.572825 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:42:04.576157 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:42:05.159101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:05.160985 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:42:05.162322 systemd[1]: Startup finished in 6.546s (kernel) + 4.256s (userspace) = 10.802s. 
Jan 30 13:42:05.163248 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:42:05.619324 kubelet[1653]: E0130 13:42:05.619203 1653 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:42:05.622970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:42:05.623245 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:42:13.011959 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:42:13.019703 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:53940.service - OpenSSH per-connection server daemon (10.0.0.1:53940). Jan 30 13:42:13.062228 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 53940 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:13.064723 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:13.073018 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:42:13.082755 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:42:13.084569 systemd-logind[1532]: New session 1 of user core. Jan 30 13:42:13.095882 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:42:13.108731 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:42:13.111614 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:42:13.213416 systemd[1674]: Queued start job for default target default.target. Jan 30 13:42:13.213809 systemd[1674]: Created slice app.slice - User Application Slice. Jan 30 13:42:13.213840 systemd[1674]: Reached target paths.target - Paths. Jan 30 13:42:13.213854 systemd[1674]: Reached target timers.target - Timers. Jan 30 13:42:13.229573 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:42:13.237545 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:42:13.237610 systemd[1674]: Reached target sockets.target - Sockets. Jan 30 13:42:13.237627 systemd[1674]: Reached target basic.target - Basic System. Jan 30 13:42:13.237664 systemd[1674]: Reached target default.target - Main User Target. Jan 30 13:42:13.237695 systemd[1674]: Startup finished in 119ms. Jan 30 13:42:13.238595 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:42:13.240339 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:42:13.298760 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:53952.service - OpenSSH per-connection server daemon (10.0.0.1:53952). Jan 30 13:42:13.330809 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 53952 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:13.332477 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:13.336752 systemd-logind[1532]: New session 2 of user core. Jan 30 13:42:13.343840 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 30 13:42:13.399273 sshd[1686]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:13.407740 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:53954.service - OpenSSH per-connection server daemon (10.0.0.1:53954). Jan 30 13:42:13.408210 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:53952.service: Deactivated successfully. Jan 30 13:42:13.410715 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:42:13.411909 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:42:13.412918 systemd-logind[1532]: Removed session 2. Jan 30 13:42:13.443012 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 53954 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:13.445033 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:13.449347 systemd-logind[1532]: New session 3 of user core. Jan 30 13:42:13.459884 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:42:13.510268 sshd[1691]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:13.523795 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:53968.service - OpenSSH per-connection server daemon (10.0.0.1:53968). Jan 30 13:42:13.524464 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:53954.service: Deactivated successfully. Jan 30 13:42:13.527807 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:42:13.528630 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:42:13.531000 systemd-logind[1532]: Removed session 3. Jan 30 13:42:13.554964 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 53968 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:13.556670 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:13.561187 systemd-logind[1532]: New session 4 of user core. Jan 30 13:42:13.571771 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:42:13.626287 sshd[1699]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:13.634847 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:53972.service - OpenSSH per-connection server daemon (10.0.0.1:53972). Jan 30 13:42:13.635519 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:53968.service: Deactivated successfully. Jan 30 13:42:13.638479 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:42:13.639389 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:42:13.641393 systemd-logind[1532]: Removed session 4. Jan 30 13:42:13.664984 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 53972 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:13.666565 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:13.670581 systemd-logind[1532]: New session 5 of user core. Jan 30 13:42:13.680886 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:42:13.739303 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:42:13.739709 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:42:13.760999 sudo[1714]: pam_unix(sudo:session): session closed for user root Jan 30 13:42:13.763122 sshd[1707]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:13.771709 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:53980.service - OpenSSH per-connection server daemon (10.0.0.1:53980). 
Jan 30 13:42:13.772187 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:53972.service: Deactivated successfully. Jan 30 13:42:13.774353 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:42:13.775462 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:42:13.776435 systemd-logind[1532]: Removed session 5. Jan 30 13:42:13.803729 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 53980 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:13.805261 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:13.810050 systemd-logind[1532]: New session 6 of user core. Jan 30 13:42:13.828938 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:42:13.884867 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:42:13.885199 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:42:13.888845 sudo[1724]: pam_unix(sudo:session): session closed for user root Jan 30 13:42:13.897406 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:42:13.897925 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:42:13.918792 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:42:13.920865 auditctl[1727]: No rules Jan 30 13:42:13.922340 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:42:13.922719 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:42:13.924628 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:42:13.957862 augenrules[1746]: No rules Jan 30 13:42:13.959001 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:42:13.960329 sudo[1723]: pam_unix(sudo:session): session closed for user root Jan 30 13:42:13.962047 sshd[1716]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:13.979897 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:53982.service - OpenSSH per-connection server daemon (10.0.0.1:53982). Jan 30 13:42:13.980772 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:53980.service: Deactivated successfully. Jan 30 13:42:13.982621 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:42:13.983436 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:42:13.985079 systemd-logind[1532]: Removed session 6. Jan 30 13:42:14.009359 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 53982 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:14.010873 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:14.015248 systemd-logind[1532]: New session 7 of user core. Jan 30 13:42:14.027730 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:42:14.083212 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:42:14.083662 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:42:14.400764 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 30 13:42:14.401127 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:42:14.672352 dockerd[1777]: time="2025-01-30T13:42:14.672282870Z" level=info msg="Starting up" Jan 30 13:42:15.360981 dockerd[1777]: time="2025-01-30T13:42:15.360926082Z" level=info msg="Loading containers: start." Jan 30 13:42:15.465509 kernel: Initializing XFRM netlink socket Jan 30 13:42:15.541818 systemd-networkd[1240]: docker0: Link UP Jan 30 13:42:15.568066 dockerd[1777]: time="2025-01-30T13:42:15.568033382Z" level=info msg="Loading containers: done." Jan 30 13:42:15.582791 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3915384223-merged.mount: Deactivated successfully. Jan 30 13:42:15.656908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:42:15.666622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:15.812316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:15.837519 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:42:15.880026 kubelet[1902]: E0130 13:42:15.879967 1902 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:42:15.887473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:42:15.887872 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:42:16.169264 dockerd[1777]: time="2025-01-30T13:42:16.169185236Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:42:16.169774 dockerd[1777]: time="2025-01-30T13:42:16.169380483Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:42:16.169774 dockerd[1777]: time="2025-01-30T13:42:16.169590537Z" level=info msg="Daemon has completed initialization" Jan 30 13:42:16.417812 dockerd[1777]: time="2025-01-30T13:42:16.417752940Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:42:16.419683 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:42:17.134114 containerd[1552]: time="2025-01-30T13:42:17.134063548Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:42:17.739465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066282726.mount: Deactivated successfully. 
Jan 30 13:42:18.757338 containerd[1552]: time="2025-01-30T13:42:18.757275940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:18.758036 containerd[1552]: time="2025-01-30T13:42:18.757967627Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:42:18.759454 containerd[1552]: time="2025-01-30T13:42:18.759403490Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:18.762064 containerd[1552]: time="2025-01-30T13:42:18.762032851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:18.763001 containerd[1552]: time="2025-01-30T13:42:18.762968997Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.628863992s" Jan 30 13:42:18.763001 containerd[1552]: time="2025-01-30T13:42:18.763004073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:42:18.784306 containerd[1552]: time="2025-01-30T13:42:18.784131023Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:42:20.156091 containerd[1552]: time="2025-01-30T13:42:20.156009104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:20.156905 containerd[1552]: time="2025-01-30T13:42:20.156838169Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:42:20.158650 containerd[1552]: time="2025-01-30T13:42:20.158611825Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:20.161738 containerd[1552]: time="2025-01-30T13:42:20.161703553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:20.162734 containerd[1552]: time="2025-01-30T13:42:20.162678993Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.378507183s" Jan 30 13:42:20.162734 containerd[1552]: time="2025-01-30T13:42:20.162729628Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:42:20.187233 
containerd[1552]: time="2025-01-30T13:42:20.187181183Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:42:21.322376 containerd[1552]: time="2025-01-30T13:42:21.322301349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:21.323270 containerd[1552]: time="2025-01-30T13:42:21.323223829Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:42:21.324675 containerd[1552]: time="2025-01-30T13:42:21.324648340Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:21.328879 containerd[1552]: time="2025-01-30T13:42:21.328837105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:21.329772 containerd[1552]: time="2025-01-30T13:42:21.329729509Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.142500938s" Jan 30 13:42:21.329832 containerd[1552]: time="2025-01-30T13:42:21.329779633Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:42:21.358455 containerd[1552]: time="2025-01-30T13:42:21.358405767Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:42:23.096894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169145884.mount: Deactivated successfully. 
Jan 30 13:42:24.617308 containerd[1552]: time="2025-01-30T13:42:24.617230677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:24.619005 containerd[1552]: time="2025-01-30T13:42:24.618887404Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:42:24.620960 containerd[1552]: time="2025-01-30T13:42:24.620914195Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:24.624746 containerd[1552]: time="2025-01-30T13:42:24.624678384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:24.625453 containerd[1552]: time="2025-01-30T13:42:24.625415437Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 3.26696231s" Jan 30 13:42:24.625453 containerd[1552]: time="2025-01-30T13:42:24.625448709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:42:24.653123 containerd[1552]: time="2025-01-30T13:42:24.653069127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:42:25.207225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730231246.mount: Deactivated successfully. Jan 30 13:42:25.907335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:42:25.919662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:27.301559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:27.306224 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:42:27.344740 kubelet[2107]: E0130 13:42:27.344609 2107 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:42:27.349038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:42:27.349314 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:42:27.598925 containerd[1552]: time="2025-01-30T13:42:27.598773965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:27.599835 containerd[1552]: time="2025-01-30T13:42:27.599788137Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:42:27.601220 containerd[1552]: time="2025-01-30T13:42:27.601180789Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:27.604110 containerd[1552]: time="2025-01-30T13:42:27.604063425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:27.605027 containerd[1552]: time="2025-01-30T13:42:27.604983671Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.95186499s" Jan 30 13:42:27.605027 containerd[1552]: time="2025-01-30T13:42:27.605021091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:42:27.628280 containerd[1552]: time="2025-01-30T13:42:27.628235797Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:42:28.554126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659092333.mount: Deactivated successfully. 
Jan 30 13:42:28.562018 containerd[1552]: time="2025-01-30T13:42:28.561963650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:28.562775 containerd[1552]: time="2025-01-30T13:42:28.562731660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:42:28.563846 containerd[1552]: time="2025-01-30T13:42:28.563804812Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:28.566287 containerd[1552]: time="2025-01-30T13:42:28.566262301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:28.566929 containerd[1552]: time="2025-01-30T13:42:28.566898324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 938.617162ms" Jan 30 13:42:28.566929 containerd[1552]: time="2025-01-30T13:42:28.566927038Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:42:28.589324 containerd[1552]: time="2025-01-30T13:42:28.589281971Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:42:29.276417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount253768839.mount: Deactivated successfully. Jan 30 13:42:30.950328 containerd[1552]: time="2025-01-30T13:42:30.950264326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:30.951332 containerd[1552]: time="2025-01-30T13:42:30.951265113Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:42:30.952592 containerd[1552]: time="2025-01-30T13:42:30.952561985Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:30.955474 containerd[1552]: time="2025-01-30T13:42:30.955433109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:30.956467 containerd[1552]: time="2025-01-30T13:42:30.956411133Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.367088355s" Jan 30 13:42:30.956467 containerd[1552]: time="2025-01-30T13:42:30.956441991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:42:33.082592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:42:33.092716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:33.109255 systemd[1]: Reloading requested from client PID 2263 ('systemctl') (unit session-7.scope)... Jan 30 13:42:33.109272 systemd[1]: Reloading... Jan 30 13:42:33.174516 zram_generator::config[2303]: No configuration found. Jan 30 13:42:33.738370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:42:33.809937 systemd[1]: Reloading finished in 700 ms. Jan 30 13:42:33.859659 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:42:33.859759 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:42:33.860090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:33.862762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:34.011093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:34.016108 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:42:34.052788 kubelet[2363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:42:34.052788 kubelet[2363]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:42:34.052788 kubelet[2363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:42:34.053185 kubelet[2363]: I0130 13:42:34.052824 2363 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:42:34.419195 kubelet[2363]: I0130 13:42:34.419158 2363 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:42:34.419195 kubelet[2363]: I0130 13:42:34.419186 2363 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:42:34.419417 kubelet[2363]: I0130 13:42:34.419401 2363 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:42:34.431440 kubelet[2363]: I0130 13:42:34.431414 2363 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:42:34.432118 kubelet[2363]: E0130 13:42:34.432096 2363 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:34.442321 kubelet[2363]: I0130 13:42:34.442284 2363 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:42:34.443573 kubelet[2363]: I0130 13:42:34.443538 2363 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:42:34.443716 kubelet[2363]: I0130 13:42:34.443565 2363 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:42:34.443798 kubelet[2363]: I0130 13:42:34.443729 2363 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:42:34.443798 kubelet[2363]: I0130 13:42:34.443738 2363 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:42:34.443872 kubelet[2363]: I0130 13:42:34.443860 2363 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:42:34.444474 kubelet[2363]: I0130 13:42:34.444453 2363 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:42:34.444474 kubelet[2363]: I0130 13:42:34.444467 2363 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:42:34.444561 kubelet[2363]: I0130 13:42:34.444499 2363 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:42:34.444561 kubelet[2363]: I0130 13:42:34.444517 2363 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:42:34.448562 kubelet[2363]: W0130 13:42:34.448302 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:34.448562 kubelet[2363]: E0130 13:42:34.448361 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:34.448562 kubelet[2363]: W0130 13:42:34.448426 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:34.448562 kubelet[2363]: E0130 13:42:34.448466 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:34.449008 kubelet[2363]: I0130 13:42:34.448990 2363 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:42:34.450353 kubelet[2363]: I0130 13:42:34.450337 2363 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:42:34.450476 kubelet[2363]: W0130 13:42:34.450457 2363 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:42:34.451557 kubelet[2363]: I0130 13:42:34.451410 2363 server.go:1264] "Started kubelet" Jan 30 13:42:34.452454 kubelet[2363]: I0130 13:42:34.452312 2363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:42:34.453009 kubelet[2363]: I0130 13:42:34.452743 2363 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:42:34.453009 kubelet[2363]: I0130 13:42:34.452782 2363 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:42:34.453009 kubelet[2363]: I0130 13:42:34.452872 2363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:42:34.453788 kubelet[2363]: I0130 13:42:34.453768 2363 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:42:34.456720 kubelet[2363]: E0130 13:42:34.456594 2363 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:42:34.459009 kubelet[2363]: E0130 13:42:34.458980 2363 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:42:34.459057 kubelet[2363]: I0130 13:42:34.459045 2363 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:42:34.459611 kubelet[2363]: I0130 13:42:34.459203 2363 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:42:34.459611 kubelet[2363]: I0130 13:42:34.459280 2363 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:42:34.459788 kubelet[2363]: W0130 13:42:34.459746 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:34.459848 kubelet[2363]: E0130 13:42:34.459800 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:34.460047 kubelet[2363]: E0130 13:42:34.459750 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c3670d4f05a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:42:34.451382362 +0000 UTC m=+0.431260669,LastTimestamp:2025-01-30 13:42:34.451382362 +0000 UTC m=+0.431260669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:42:34.460145 kubelet[2363]: E0130 13:42:34.460110 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Jan 30 13:42:34.460366 kubelet[2363]: I0130 13:42:34.460348 2363 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:42:34.460428 kubelet[2363]: I0130 13:42:34.460417 2363 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:42:34.461445 kubelet[2363]: I0130 13:42:34.461425 2363 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:42:34.469595 kubelet[2363]: I0130 13:42:34.469555 2363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:42:34.471050 kubelet[2363]: I0130 13:42:34.470733 2363 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:42:34.471050 kubelet[2363]: I0130 13:42:34.470761 2363 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:42:34.471050 kubelet[2363]: I0130 13:42:34.470780 2363 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:42:34.471050 kubelet[2363]: E0130 13:42:34.470824 2363 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:42:34.483977 kubelet[2363]: W0130 13:42:34.483944 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:34.483977 kubelet[2363]: E0130 13:42:34.483979 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:34.489612 kubelet[2363]: I0130 13:42:34.489590 2363 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:42:34.489612 kubelet[2363]: I0130 13:42:34.489604 2363 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:42:34.489743 kubelet[2363]: I0130 13:42:34.489634 2363 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:42:34.560505 kubelet[2363]: I0130 13:42:34.560457 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:42:34.560773 kubelet[2363]: E0130 13:42:34.560743 2363 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 30 13:42:34.571878 kubelet[2363]: E0130 13:42:34.571843 2363 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:42:34.661633 kubelet[2363]: E0130 13:42:34.661593 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Jan 30 13:42:34.762070 kubelet[2363]: I0130 13:42:34.761971 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:42:34.762322 kubelet[2363]: E0130 13:42:34.762296 2363 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 30 13:42:34.772563 kubelet[2363]: E0130 13:42:34.772525 2363 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:42:35.062590 kubelet[2363]: E0130 13:42:35.062431 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" Jan 30 13:42:35.149022 kubelet[2363]: I0130 13:42:35.148957 2363 policy_none.go:49] "None policy: Start" Jan 30 13:42:35.149729 kubelet[2363]: I0130 13:42:35.149690 2363 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:42:35.149729 
kubelet[2363]: I0130 13:42:35.149721 2363 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:42:35.164167 kubelet[2363]: I0130 13:42:35.164139 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:42:35.164471 kubelet[2363]: E0130 13:42:35.164445 2363 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 30 13:42:35.173004 kubelet[2363]: E0130 13:42:35.172778 2363 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:42:35.173207 kubelet[2363]: I0130 13:42:35.173167 2363 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:42:35.173396 kubelet[2363]: I0130 13:42:35.173365 2363 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:42:35.173514 kubelet[2363]: I0130 13:42:35.173467 2363 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:42:35.174517 kubelet[2363]: E0130 13:42:35.174504 2363 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:42:35.491431 kubelet[2363]: W0130 13:42:35.491363 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:35.491431 kubelet[2363]: E0130 13:42:35.491413 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:35.763069 kubelet[2363]: W0130 13:42:35.762938 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:35.763069 kubelet[2363]: E0130 13:42:35.762999 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:35.863922 kubelet[2363]: E0130 13:42:35.863857 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s" Jan 30 13:42:35.939098 kubelet[2363]: W0130 13:42:35.939010 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:35.939098 kubelet[2363]: E0130 13:42:35.939091 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 
13:42:35.966260 kubelet[2363]: I0130 13:42:35.966175 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:42:35.966716 kubelet[2363]: E0130 13:42:35.966639 2363 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 30 13:42:35.974241 kubelet[2363]: I0130 13:42:35.973872 2363 topology_manager.go:215] "Topology Admit Handler" podUID="ecacf7ee73d108415e87fe76c0445907" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:42:35.975357 kubelet[2363]: I0130 13:42:35.975231 2363 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:42:35.976262 kubelet[2363]: I0130 13:42:35.976225 2363 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:42:36.031825 kubelet[2363]: W0130 13:42:36.031669 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:36.031825 kubelet[2363]: E0130 13:42:36.031759 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:36.068158 kubelet[2363]: I0130 13:42:36.068095 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecacf7ee73d108415e87fe76c0445907-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecacf7ee73d108415e87fe76c0445907\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:36.068520 kubelet[2363]: I0130 13:42:36.068150 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:36.068520 kubelet[2363]: I0130 13:42:36.068184 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:36.068520 kubelet[2363]: I0130 13:42:36.068215 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:42:36.068520 kubelet[2363]: I0130 13:42:36.068234 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ecacf7ee73d108415e87fe76c0445907-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecacf7ee73d108415e87fe76c0445907\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:36.068520 kubelet[2363]: I0130 13:42:36.068251 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecacf7ee73d108415e87fe76c0445907-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ecacf7ee73d108415e87fe76c0445907\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:36.068654 kubelet[2363]: I0130 13:42:36.068269 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:36.068654 kubelet[2363]: I0130 13:42:36.068289 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:36.068654 kubelet[2363]: I0130 13:42:36.068350 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:36.280675 kubelet[2363]: E0130 13:42:36.280623 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:36.281324 containerd[1552]: time="2025-01-30T13:42:36.281273897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 30 13:42:36.282481 kubelet[2363]: E0130 13:42:36.282386 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:36.282831 containerd[1552]: time="2025-01-30T13:42:36.282781137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ecacf7ee73d108415e87fe76c0445907,Namespace:kube-system,Attempt:0,}" Jan 30 13:42:36.284042 kubelet[2363]: E0130 13:42:36.284017 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:36.284468 containerd[1552]: time="2025-01-30T13:42:36.284437345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 30 13:42:36.469632 kubelet[2363]: E0130 13:42:36.469589 2363 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.26:6443: connect: connection refused Jan 30 13:42:37.465276 kubelet[2363]: E0130 13:42:37.465210 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="3.2s" Jan 30 13:42:37.567894 kubelet[2363]: I0130 13:42:37.567859 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:42:37.568296 kubelet[2363]: E0130 13:42:37.568245 2363 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 30 13:42:37.624713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384718895.mount: Deactivated successfully. Jan 30 13:42:37.628981 containerd[1552]: time="2025-01-30T13:42:37.628913214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:42:37.632851 containerd[1552]: time="2025-01-30T13:42:37.632793830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:42:37.633949 containerd[1552]: time="2025-01-30T13:42:37.633890408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:42:37.634970 containerd[1552]: time="2025-01-30T13:42:37.634932500Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:42:37.636227 containerd[1552]: time="2025-01-30T13:42:37.636176421Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:42:37.636673 containerd[1552]: time="2025-01-30T13:42:37.636620153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:42:37.637635 containerd[1552]: time="2025-01-30T13:42:37.637604805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:42:37.638782 containerd[1552]: time="2025-01-30T13:42:37.638740918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:42:37.639943 containerd[1552]: time="2025-01-30T13:42:37.639906958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.358546116s" Jan 30 13:42:37.642104 containerd[1552]: time="2025-01-30T13:42:37.642065015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.359190469s" Jan 30 13:42:37.644368 containerd[1552]: time="2025-01-30T13:42:37.644333124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.359821467s" Jan 30 13:42:37.804946 containerd[1552]: time="2025-01-30T13:42:37.804331351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:42:37.804946 containerd[1552]: time="2025-01-30T13:42:37.804377200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:42:37.804946 containerd[1552]: time="2025-01-30T13:42:37.804391166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:37.804946 containerd[1552]: time="2025-01-30T13:42:37.804465399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:37.805353 containerd[1552]: time="2025-01-30T13:42:37.804287025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:42:37.805353 containerd[1552]: time="2025-01-30T13:42:37.804351900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:42:37.805353 containerd[1552]: time="2025-01-30T13:42:37.804375766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:37.805353 containerd[1552]: time="2025-01-30T13:42:37.804984827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:37.805353 containerd[1552]: time="2025-01-30T13:42:37.804875808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:42:37.805353 containerd[1552]: time="2025-01-30T13:42:37.804948507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:42:37.805353 containerd[1552]: time="2025-01-30T13:42:37.804967524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:37.805353 containerd[1552]: time="2025-01-30T13:42:37.805097734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:37.871830 containerd[1552]: time="2025-01-30T13:42:37.871781240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ecacf7ee73d108415e87fe76c0445907,Namespace:kube-system,Attempt:0,} returns sandbox id \"15ecb702e36a6569ab6e3e82df3e92b3f5a507548cd1a778017ca75a5049839a\"" Jan 30 13:42:37.872835 containerd[1552]: time="2025-01-30T13:42:37.872776713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c420577bc0633ac7320a4cf8527524d7c94b6e98a87e0dc1ae1b55c863f6668\"" Jan 30 13:42:37.873468 kubelet[2363]: E0130 13:42:37.873441 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:37.873920 kubelet[2363]: E0130 13:42:37.873456 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:37.876873 containerd[1552]: time="2025-01-30T13:42:37.876846914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fbfe7f965e01fea0639d0c3f0e0028a07d57830dec94cef2d6048eb74739d94\"" Jan 30 13:42:37.877222 kubelet[2363]: E0130 13:42:37.877201 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:37.877811 containerd[1552]: time="2025-01-30T13:42:37.877711445Z" level=info msg="CreateContainer within sandbox \"15ecb702e36a6569ab6e3e82df3e92b3f5a507548cd1a778017ca75a5049839a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:42:37.878983 containerd[1552]: time="2025-01-30T13:42:37.878955927Z" level=info msg="CreateContainer within sandbox \"1fbfe7f965e01fea0639d0c3f0e0028a07d57830dec94cef2d6048eb74739d94\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:42:37.880108 containerd[1552]: time="2025-01-30T13:42:37.878767324Z" level=info msg="CreateContainer within sandbox \"1c420577bc0633ac7320a4cf8527524d7c94b6e98a87e0dc1ae1b55c863f6668\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:42:37.926132 containerd[1552]: time="2025-01-30T13:42:37.926080210Z" level=info msg="CreateContainer within sandbox \"1fbfe7f965e01fea0639d0c3f0e0028a07d57830dec94cef2d6048eb74739d94\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ffaa621163865e2b4ead86d485b1c2ea644f108fd0a846ee2c6d1f745a61437\"" Jan 30 13:42:37.926704 containerd[1552]: time="2025-01-30T13:42:37.926656256Z" level=info msg="StartContainer for \"5ffaa621163865e2b4ead86d485b1c2ea644f108fd0a846ee2c6d1f745a61437\"" Jan 30 13:42:37.928291 containerd[1552]: time="2025-01-30T13:42:37.928255789Z" level=info msg="CreateContainer within sandbox \"15ecb702e36a6569ab6e3e82df3e92b3f5a507548cd1a778017ca75a5049839a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3fc97fcaf30afc4751f6335f002e49554d7334b071c56a0d4b8881ddb892d4b\"" Jan 30 13:42:37.928592 containerd[1552]: time="2025-01-30T13:42:37.928562540Z" level=info msg="StartContainer for 
\"b3fc97fcaf30afc4751f6335f002e49554d7334b071c56a0d4b8881ddb892d4b\"" Jan 30 13:42:37.931809 containerd[1552]: time="2025-01-30T13:42:37.931756817Z" level=info msg="CreateContainer within sandbox \"1c420577bc0633ac7320a4cf8527524d7c94b6e98a87e0dc1ae1b55c863f6668\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a66bd5b573442dad2a221ef112b25c28ddb4b8aa56407d430f6af0af8009f298\"" Jan 30 13:42:37.933008 containerd[1552]: time="2025-01-30T13:42:37.932081370Z" level=info msg="StartContainer for \"a66bd5b573442dad2a221ef112b25c28ddb4b8aa56407d430f6af0af8009f298\"" Jan 30 13:42:37.999664 containerd[1552]: time="2025-01-30T13:42:37.999611164Z" level=info msg="StartContainer for \"5ffaa621163865e2b4ead86d485b1c2ea644f108fd0a846ee2c6d1f745a61437\" returns successfully" Jan 30 13:42:37.999803 containerd[1552]: time="2025-01-30T13:42:37.999738278Z" level=info msg="StartContainer for \"a66bd5b573442dad2a221ef112b25c28ddb4b8aa56407d430f6af0af8009f298\" returns successfully" Jan 30 13:42:37.999827 containerd[1552]: time="2025-01-30T13:42:37.999812440Z" level=info msg="StartContainer for \"b3fc97fcaf30afc4751f6335f002e49554d7334b071c56a0d4b8881ddb892d4b\" returns successfully" Jan 30 13:42:38.494388 kubelet[2363]: E0130 13:42:38.494326 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:38.499899 kubelet[2363]: E0130 13:42:38.499874 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:38.503088 kubelet[2363]: E0130 13:42:38.503061 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:39.451195 kubelet[2363]: I0130 13:42:39.451149 2363 apiserver.go:52] "Watching apiserver" Jan 30 13:42:39.459980 kubelet[2363]: I0130 13:42:39.459944 2363 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:42:39.502931 kubelet[2363]: E0130 13:42:39.502905 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:39.503337 kubelet[2363]: E0130 13:42:39.503003 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:39.503337 kubelet[2363]: E0130 13:42:39.503074 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:39.648129 kubelet[2363]: E0130 13:42:39.648087 2363 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 30 13:42:40.013222 kubelet[2363]: E0130 13:42:40.013158 2363 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 30 13:42:40.684235 kubelet[2363]: E0130 13:42:40.684182 2363 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" 
node="localhost" Jan 30 13:42:40.705173 kubelet[2363]: E0130 13:42:40.705148 2363 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 30 13:42:40.769534 kubelet[2363]: I0130 13:42:40.769502 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:42:40.833788 kubelet[2363]: I0130 13:42:40.833757 2363 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:42:41.947968 systemd[1]: Reloading requested from client PID 2644 ('systemctl') (unit session-7.scope)... Jan 30 13:42:41.947981 systemd[1]: Reloading... Jan 30 13:42:42.022520 zram_generator::config[2686]: No configuration found. Jan 30 13:42:42.140367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:42:42.222052 systemd[1]: Reloading finished in 273 ms. Jan 30 13:42:42.259975 kubelet[2363]: I0130 13:42:42.259915 2363 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:42:42.259997 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:42.282942 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:42:42.283364 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:42.288779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:42.425809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:42.430537 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:42:42.467709 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:42:42.467709 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:42:42.467709 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:42:42.468142 kubelet[2738]: I0130 13:42:42.467759 2738 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:42:42.472036 kubelet[2738]: I0130 13:42:42.472009 2738 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:42:42.472036 kubelet[2738]: I0130 13:42:42.472029 2738 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:42:42.472263 kubelet[2738]: I0130 13:42:42.472197 2738 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:42:42.473473 kubelet[2738]: I0130 13:42:42.473447 2738 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 13:42:42.474501 kubelet[2738]: I0130 13:42:42.474458 2738 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:42:42.482155 kubelet[2738]: I0130 13:42:42.482132 2738 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:42:42.482706 kubelet[2738]: I0130 13:42:42.482676 2738 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:42:42.482838 kubelet[2738]: I0130 13:42:42.482704 2738 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:42:42.482994 kubelet[2738]: I0130 13:42:42.482855 2738 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:42:42.482994 kubelet[2738]: I0130 13:42:42.482864 2738 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:42:42.482994 kubelet[2738]: I0130 13:42:42.482902 2738 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:42:42.482994 kubelet[2738]: I0130 13:42:42.482979 2738 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:42:42.482994 kubelet[2738]: I0130 13:42:42.482988 2738 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:42:42.483094 kubelet[2738]: I0130 13:42:42.483008 2738 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:42:42.483094 kubelet[2738]: I0130 13:42:42.483024 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:42:42.483826 kubelet[2738]: I0130 13:42:42.483528 2738 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:42:42.483826 kubelet[2738]: I0130 13:42:42.483728 2738 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:42:42.484188 kubelet[2738]: I0130 13:42:42.484168 2738 server.go:1264] "Started kubelet" Jan 30 13:42:42.484750 kubelet[2738]: I0130 
13:42:42.484700 2738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:42:42.485064 kubelet[2738]: I0130 13:42:42.485049 2738 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:42:42.485168 kubelet[2738]: I0130 13:42:42.485151 2738 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:42:42.486271 kubelet[2738]: I0130 13:42:42.486255 2738 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:42:42.489013 kubelet[2738]: I0130 13:42:42.488991 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:42:42.495898 kubelet[2738]: E0130 13:42:42.495233 2738 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:42:42.495898 kubelet[2738]: I0130 13:42:42.495283 2738 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:42:42.495898 kubelet[2738]: I0130 13:42:42.495405 2738 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:42:42.495898 kubelet[2738]: E0130 13:42:42.495464 2738 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:42:42.495898 kubelet[2738]: I0130 13:42:42.495605 2738 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:42:42.496850 kubelet[2738]: I0130 13:42:42.496728 2738 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:42:42.496850 kubelet[2738]: I0130 13:42:42.496825 2738 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:42:42.498909 kubelet[2738]: I0130 13:42:42.498887 2738 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:42:42.503035 kubelet[2738]: I0130 13:42:42.502990 2738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:42:42.506497 kubelet[2738]: I0130 13:42:42.504457 2738 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:42:42.506497 kubelet[2738]: I0130 13:42:42.504548 2738 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:42:42.506497 kubelet[2738]: I0130 13:42:42.504568 2738 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:42:42.506497 kubelet[2738]: E0130 13:42:42.504634 2738 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:42:42.547159 kubelet[2738]: I0130 13:42:42.547132 2738 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:42:42.547331 kubelet[2738]: I0130 13:42:42.547318 2738 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:42:42.547396 kubelet[2738]: I0130 13:42:42.547386 2738 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:42:42.547679 kubelet[2738]: I0130 13:42:42.547663 2738 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:42:42.547756 kubelet[2738]: I0130 13:42:42.547732 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:42:42.547813 kubelet[2738]: I0130 13:42:42.547804 2738 policy_none.go:49] "None policy: Start" Jan 30 13:42:42.549652 kubelet[2738]: I0130 13:42:42.549610 2738 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:42:42.549652 kubelet[2738]: I0130 13:42:42.549647 2738 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:42:42.549865 kubelet[2738]: I0130 13:42:42.549847 2738 state_mem.go:75] "Updated machine memory state" Jan 30 13:42:42.552128 kubelet[2738]: I0130 13:42:42.552099 2738 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:42:42.552371 kubelet[2738]: I0130 13:42:42.552329 2738 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:42:42.552460 kubelet[2738]: I0130 13:42:42.552443 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:42:42.600623 kubelet[2738]: I0130 13:42:42.600350 2738 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:42:42.605248 kubelet[2738]: I0130 13:42:42.605192 2738 topology_manager.go:215] "Topology Admit Handler" podUID="ecacf7ee73d108415e87fe76c0445907" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:42:42.605383 kubelet[2738]: I0130 13:42:42.605293 2738 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:42:42.605383 kubelet[2738]: I0130 13:42:42.605353 2738 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:42:42.607432 kubelet[2738]: I0130 13:42:42.606909 2738 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 13:42:42.607432 kubelet[2738]: I0130 13:42:42.606968 2738 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:42:42.697036 kubelet[2738]: I0130 13:42:42.696988 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecacf7ee73d108415e87fe76c0445907-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecacf7ee73d108415e87fe76c0445907\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:42.697036 
kubelet[2738]: I0130 13:42:42.697024 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecacf7ee73d108415e87fe76c0445907-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecacf7ee73d108415e87fe76c0445907\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:42.697036 kubelet[2738]: I0130 13:42:42.697043 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecacf7ee73d108415e87fe76c0445907-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ecacf7ee73d108415e87fe76c0445907\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:42.697234 kubelet[2738]: I0130 13:42:42.697061 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:42.697234 kubelet[2738]: I0130 13:42:42.697083 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:42.697234 kubelet[2738]: I0130 13:42:42.697103 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:42.697234 kubelet[2738]: I0130 13:42:42.697126 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:42:42.697234 kubelet[2738]: I0130 13:42:42.697185 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:42.697354 kubelet[2738]: I0130 13:42:42.697234 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:42.916798 kubelet[2738]: E0130 13:42:42.916691 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:42.916798 kubelet[2738]: E0130 13:42:42.916754 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:42.916798 kubelet[2738]: E0130 13:42:42.916803 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:43.487510 kubelet[2738]: I0130 13:42:43.484597 2738 apiserver.go:52] "Watching apiserver" Jan 30 13:42:43.496000 kubelet[2738]: I0130 13:42:43.495960 2738 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:42:43.513831 kubelet[2738]: E0130 13:42:43.513798 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:43.514611 kubelet[2738]: E0130 13:42:43.514549 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:43.582627 kubelet[2738]: E0130 13:42:43.582006 2738 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:43.583322 kubelet[2738]: E0130 13:42:43.583284 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:43.590277 kubelet[2738]: I0130 13:42:43.590211 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.590188519 podStartE2EDuration="1.590188519s" podCreationTimestamp="2025-01-30 13:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:42:43.582292013 +0000 UTC m=+1.148044085" watchObservedRunningTime="2025-01-30 13:42:43.590188519 +0000 UTC m=+1.155940591" Jan 30 13:42:43.597065 kubelet[2738]: I0130 13:42:43.596983 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5969666230000001 podStartE2EDuration="1.596966623s" podCreationTimestamp="2025-01-30 13:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:42:43.590434809 +0000 UTC m=+1.156186891" watchObservedRunningTime="2025-01-30 13:42:43.596966623 +0000 UTC m=+1.162718695" Jan 30 13:42:44.516051 kubelet[2738]: E0130 13:42:44.516020 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:47.182105 kubelet[2738]: E0130 13:42:47.182037 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:47.194475 kubelet[2738]: I0130 13:42:47.194426 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.19440046 podStartE2EDuration="5.19440046s" podCreationTimestamp="2025-01-30 13:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-30 13:42:43.597028962 +0000 UTC m=+1.162781034" watchObservedRunningTime="2025-01-30 13:42:47.19440046 +0000 UTC m=+4.760152532" Jan 30 13:42:47.520297 kubelet[2738]: E0130 13:42:47.520183 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:47.615042 sudo[1759]: pam_unix(sudo:session): session closed for user root Jan 30 13:42:47.617292 sshd[1753]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:47.621697 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:53982.service: Deactivated successfully. Jan 30 13:42:47.623816 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:42:47.623915 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:42:47.625013 systemd-logind[1532]: Removed session 7. Jan 30 13:42:48.163234 kubelet[2738]: E0130 13:42:48.163188 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:48.521854 kubelet[2738]: E0130 13:42:48.521709 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:49.360432 update_engine[1534]: I20250130 13:42:49.360366 1534 update_attempter.cc:509] Updating boot flags... Jan 30 13:42:49.432628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2833) Jan 30 13:42:49.462620 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2831) Jan 30 13:42:50.349424 kubelet[2738]: E0130 13:42:50.349354 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:50.525180 kubelet[2738]: E0130 13:42:50.525149 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:56.528094 kubelet[2738]: I0130 13:42:56.528064 2738 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:42:56.528682 containerd[1552]: time="2025-01-30T13:42:56.528447894Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
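The dns.go:153 "Nameserver limits exceeded" events recurring through this stretch stem from the glibc resolver's three-nameserver ceiling: the node's resolv.conf lists more servers than that, so the kubelet keeps the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and reports the rest as omitted. A rough sketch of that trimming, assuming a conventional resolv.conf layout; trimNameservers is an illustrative name rather than the kubelet's own function:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS limit

// trimNameservers collects the nameserver entries from a resolv.conf
// and keeps only the first three, mirroring the omission the kubelet
// logs above. (Illustrative sketch, not the kubelet implementation.)
func trimNameservers(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, err
	}
	if len(servers) > maxNameservers {
		// Anything past the limit is dropped, as in the events above.
		servers = servers[:maxNameservers]
	}
	return servers, nil
}

func main() {
	servers, err := trimNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}

glibc silently ignores nameservers beyond the limit anyway, which is why the kubelet trims the list and warns rather than failing.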
Jan 30 13:42:56.529010 kubelet[2738]: I0130 13:42:56.528685 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:42:56.574953 kubelet[2738]: I0130 13:42:56.574906 2738 topology_manager.go:215] "Topology Admit Handler" podUID="210c22c2-4cdd-4273-acd2-aa67ea8b298d" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-8l4nd" Jan 30 13:42:56.639981 kubelet[2738]: I0130 13:42:56.639937 2738 topology_manager.go:215] "Topology Admit Handler" podUID="a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb" podNamespace="kube-system" podName="kube-proxy-htrc8" Jan 30 13:42:56.689777 kubelet[2738]: I0130 13:42:56.689712 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/210c22c2-4cdd-4273-acd2-aa67ea8b298d-var-lib-calico\") pod \"tigera-operator-7bc55997bb-8l4nd\" (UID: \"210c22c2-4cdd-4273-acd2-aa67ea8b298d\") " pod="tigera-operator/tigera-operator-7bc55997bb-8l4nd" Jan 30 13:42:56.689777 kubelet[2738]: I0130 13:42:56.689760 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxhpr\" (UniqueName: \"kubernetes.io/projected/210c22c2-4cdd-4273-acd2-aa67ea8b298d-kube-api-access-hxhpr\") pod \"tigera-operator-7bc55997bb-8l4nd\" (UID: \"210c22c2-4cdd-4273-acd2-aa67ea8b298d\") " pod="tigera-operator/tigera-operator-7bc55997bb-8l4nd" Jan 30 13:42:56.790803 kubelet[2738]: I0130 13:42:56.790634 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb-xtables-lock\") pod \"kube-proxy-htrc8\" (UID: \"a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb\") " pod="kube-system/kube-proxy-htrc8" Jan 30 13:42:56.790803 kubelet[2738]: I0130 13:42:56.790681 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb-kube-proxy\") pod \"kube-proxy-htrc8\" (UID: \"a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb\") " pod="kube-system/kube-proxy-htrc8" Jan 30 13:42:56.790803 kubelet[2738]: I0130 13:42:56.790695 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb-lib-modules\") pod \"kube-proxy-htrc8\" (UID: \"a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb\") " pod="kube-system/kube-proxy-htrc8" Jan 30 13:42:56.790803 kubelet[2738]: I0130 13:42:56.790744 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs4cc\" (UniqueName: \"kubernetes.io/projected/a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb-kube-api-access-cs4cc\") pod \"kube-proxy-htrc8\" (UID: \"a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb\") " pod="kube-system/kube-proxy-htrc8" Jan 30 13:42:56.880856 containerd[1552]: time="2025-01-30T13:42:56.880798845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8l4nd,Uid:210c22c2-4cdd-4273-acd2-aa67ea8b298d,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:42:56.906900 containerd[1552]: time="2025-01-30T13:42:56.906807203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:42:56.906900 containerd[1552]: time="2025-01-30T13:42:56.906859261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:42:56.906900 containerd[1552]: time="2025-01-30T13:42:56.906871765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:56.907099 containerd[1552]: time="2025-01-30T13:42:56.906973167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:56.944292 kubelet[2738]: E0130 13:42:56.944258 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:56.944864 containerd[1552]: time="2025-01-30T13:42:56.944826538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htrc8,Uid:a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb,Namespace:kube-system,Attempt:0,}" Jan 30 13:42:56.960373 containerd[1552]: time="2025-01-30T13:42:56.960325594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8l4nd,Uid:210c22c2-4cdd-4273-acd2-aa67ea8b298d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d24882b8d14f428e55cc85df84174413dbd3cdd726039ae6f885a21284e27baa\"" Jan 30 13:42:56.962148 containerd[1552]: time="2025-01-30T13:42:56.962128110Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:42:56.972448 containerd[1552]: time="2025-01-30T13:42:56.972364064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:42:56.972448 containerd[1552]: time="2025-01-30T13:42:56.972418698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:42:56.972448 containerd[1552]: time="2025-01-30T13:42:56.972431943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:56.972646 containerd[1552]: time="2025-01-30T13:42:56.972548813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:57.010885 containerd[1552]: time="2025-01-30T13:42:57.010820636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htrc8,Uid:a9052fa2-5d57-48c9-b0f2-4e2f9b0f11fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bd86d71560c6be4024f8bc752592ee23b6d72883426648f81f17edb33be9b28\"" Jan 30 13:42:57.011527 kubelet[2738]: E0130 13:42:57.011504 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:57.013899 containerd[1552]: time="2025-01-30T13:42:57.013865806Z" level=info msg="CreateContainer within sandbox \"3bd86d71560c6be4024f8bc752592ee23b6d72883426648f81f17edb33be9b28\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:42:57.040056 containerd[1552]: time="2025-01-30T13:42:57.040010419Z" level=info msg="CreateContainer within sandbox \"3bd86d71560c6be4024f8bc752592ee23b6d72883426648f81f17edb33be9b28\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1a14f6118859724f21ae53cdac6cf1f55f484e7e3b5a57d0f540ec888342eca\"" Jan 30 13:42:57.040750 containerd[1552]: time="2025-01-30T13:42:57.040419421Z" level=info msg="StartContainer for \"d1a14f6118859724f21ae53cdac6cf1f55f484e7e3b5a57d0f540ec888342eca\"" Jan 30 13:42:57.238777 containerd[1552]: time="2025-01-30T13:42:57.238729614Z" level=info msg="StartContainer for \"d1a14f6118859724f21ae53cdac6cf1f55f484e7e3b5a57d0f540ec888342eca\" returns successfully" Jan 30 13:42:57.537788 kubelet[2738]: E0130 13:42:57.537504 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:57.801777 kubelet[2738]: I0130 13:42:57.801636 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-htrc8" podStartSLOduration=1.8016214210000001 podStartE2EDuration="1.801621421s" podCreationTimestamp="2025-01-30 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:42:57.801105157 +0000 UTC m=+15.366857229" watchObservedRunningTime="2025-01-30 13:42:57.801621421 +0000 UTC m=+15.367373493" Jan 30 13:43:01.722241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266898053.mount: Deactivated successfully. 
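The PullImage of quay.io/tigera/operator:v1.36.2 issued above resolves in the ImageCreate/ImageUpdate events that follow. The kubelet drives pulls through the CRI image service, but the same fetch can be reproduced against containerd's native Go client (the v1.7 module, matching the v1.7.21 runtime in this log) in the k8s.io namespace, where CRI-managed images live; a sketch assuming the default socket path:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are stored in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := image.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", image.Name(), size)
}

WithPullUnpack fetches and unpacks the layers in one step, so the image is immediately usable for container creation once the pull returns.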
Jan 30 13:43:01.989354 containerd[1552]: time="2025-01-30T13:43:01.989231719Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:01.990044 containerd[1552]: time="2025-01-30T13:43:01.989970332Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:43:01.991024 containerd[1552]: time="2025-01-30T13:43:01.990987961Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:01.993100 containerd[1552]: time="2025-01-30T13:43:01.993068963Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:01.993788 containerd[1552]: time="2025-01-30T13:43:01.993757862Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.031452718s" Jan 30 13:43:01.993788 containerd[1552]: time="2025-01-30T13:43:01.993787919Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:43:01.998375 containerd[1552]: time="2025-01-30T13:43:01.998343587Z" level=info msg="CreateContainer within sandbox \"d24882b8d14f428e55cc85df84174413dbd3cdd726039ae6f885a21284e27baa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:43:02.010181 containerd[1552]: time="2025-01-30T13:43:02.010128818Z" level=info msg="CreateContainer within sandbox \"d24882b8d14f428e55cc85df84174413dbd3cdd726039ae6f885a21284e27baa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3776586fbb4db3841d5c25b9d868e7ffac870eb0f73d15b72b9241002f2ff3ec\"" Jan 30 13:43:02.010615 containerd[1552]: time="2025-01-30T13:43:02.010595738Z" level=info msg="StartContainer for \"3776586fbb4db3841d5c25b9d868e7ffac870eb0f73d15b72b9241002f2ff3ec\"" Jan 30 13:43:02.059767 containerd[1552]: time="2025-01-30T13:43:02.059729243Z" level=info msg="StartContainer for \"3776586fbb4db3841d5c25b9d868e7ffac870eb0f73d15b72b9241002f2ff3ec\" returns successfully" Jan 30 13:43:05.258280 kubelet[2738]: I0130 13:43:05.258198 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-8l4nd" podStartSLOduration=4.22325629 podStartE2EDuration="9.258174964s" podCreationTimestamp="2025-01-30 13:42:56 +0000 UTC" firstStartedPulling="2025-01-30 13:42:56.961743954 +0000 UTC m=+14.527496026" lastFinishedPulling="2025-01-30 13:43:01.996662628 +0000 UTC m=+19.562414700" observedRunningTime="2025-01-30 13:43:02.592940283 +0000 UTC m=+20.158692355" watchObservedRunningTime="2025-01-30 13:43:05.258174964 +0000 UTC m=+22.823927036" Jan 30 13:43:05.268221 kubelet[2738]: I0130 13:43:05.268175 2738 topology_manager.go:215] "Topology Admit Handler" podUID="5ff6056e-508e-42fc-b6ff-740e335b1cc7" podNamespace="calico-system" podName="calico-typha-6789d6b488-l782j" Jan 30 13:43:05.295772 kubelet[2738]: I0130 13:43:05.295728 2738 topology_manager.go:215] "Topology 
Admit Handler" podUID="48012f06-aaf0-470c-bf89-7cc103f69f53" podNamespace="calico-system" podName="calico-node-lnp8c" Jan 30 13:43:05.399281 kubelet[2738]: I0130 13:43:05.399237 2738 topology_manager.go:215] "Topology Admit Handler" podUID="72a87a81-6fc8-4427-8a91-308c02047854" podNamespace="calico-system" podName="csi-node-driver-w9jzb" Jan 30 13:43:05.406519 kubelet[2738]: E0130 13:43:05.405902 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9jzb" podUID="72a87a81-6fc8-4427-8a91-308c02047854" Jan 30 13:43:05.451343 kubelet[2738]: I0130 13:43:05.451269 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff6056e-508e-42fc-b6ff-740e335b1cc7-tigera-ca-bundle\") pod \"calico-typha-6789d6b488-l782j\" (UID: \"5ff6056e-508e-42fc-b6ff-740e335b1cc7\") " pod="calico-system/calico-typha-6789d6b488-l782j" Jan 30 13:43:05.451343 kubelet[2738]: I0130 13:43:05.451327 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/48012f06-aaf0-470c-bf89-7cc103f69f53-policysync\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451343 kubelet[2738]: I0130 13:43:05.451350 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48012f06-aaf0-470c-bf89-7cc103f69f53-tigera-ca-bundle\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451643 kubelet[2738]: I0130 13:43:05.451378 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/48012f06-aaf0-470c-bf89-7cc103f69f53-node-certs\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451643 kubelet[2738]: I0130 13:43:05.451416 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/48012f06-aaf0-470c-bf89-7cc103f69f53-cni-bin-dir\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451643 kubelet[2738]: I0130 13:43:05.451452 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48012f06-aaf0-470c-bf89-7cc103f69f53-var-lib-calico\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451643 kubelet[2738]: I0130 13:43:05.451568 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/48012f06-aaf0-470c-bf89-7cc103f69f53-var-run-calico\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451643 kubelet[2738]: I0130 13:43:05.451606 2738 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdmqm\" (UniqueName: \"kubernetes.io/projected/5ff6056e-508e-42fc-b6ff-740e335b1cc7-kube-api-access-fdmqm\") pod \"calico-typha-6789d6b488-l782j\" (UID: \"5ff6056e-508e-42fc-b6ff-740e335b1cc7\") " pod="calico-system/calico-typha-6789d6b488-l782j" Jan 30 13:43:05.451844 kubelet[2738]: I0130 13:43:05.451628 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48012f06-aaf0-470c-bf89-7cc103f69f53-lib-modules\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451844 kubelet[2738]: I0130 13:43:05.451648 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/48012f06-aaf0-470c-bf89-7cc103f69f53-cni-net-dir\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451844 kubelet[2738]: I0130 13:43:05.451764 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fszn\" (UniqueName: \"kubernetes.io/projected/48012f06-aaf0-470c-bf89-7cc103f69f53-kube-api-access-8fszn\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451844 kubelet[2738]: I0130 13:43:05.451810 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5ff6056e-508e-42fc-b6ff-740e335b1cc7-typha-certs\") pod \"calico-typha-6789d6b488-l782j\" (UID: \"5ff6056e-508e-42fc-b6ff-740e335b1cc7\") " pod="calico-system/calico-typha-6789d6b488-l782j" Jan 30 13:43:05.451844 kubelet[2738]: I0130 13:43:05.451837 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/48012f06-aaf0-470c-bf89-7cc103f69f53-cni-log-dir\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451996 kubelet[2738]: I0130 13:43:05.451857 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/48012f06-aaf0-470c-bf89-7cc103f69f53-flexvol-driver-host\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.451996 kubelet[2738]: I0130 13:43:05.451876 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48012f06-aaf0-470c-bf89-7cc103f69f53-xtables-lock\") pod \"calico-node-lnp8c\" (UID: \"48012f06-aaf0-470c-bf89-7cc103f69f53\") " pod="calico-system/calico-node-lnp8c" Jan 30 13:43:05.553355 kubelet[2738]: I0130 13:43:05.553028 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/72a87a81-6fc8-4427-8a91-308c02047854-socket-dir\") pod \"csi-node-driver-w9jzb\" (UID: \"72a87a81-6fc8-4427-8a91-308c02047854\") " pod="calico-system/csi-node-driver-w9jzb" Jan 30 13:43:05.553355 kubelet[2738]: I0130 13:43:05.553102 2738 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72a87a81-6fc8-4427-8a91-308c02047854-kubelet-dir\") pod \"csi-node-driver-w9jzb\" (UID: \"72a87a81-6fc8-4427-8a91-308c02047854\") " pod="calico-system/csi-node-driver-w9jzb" Jan 30 13:43:05.553355 kubelet[2738]: I0130 13:43:05.553126 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/72a87a81-6fc8-4427-8a91-308c02047854-registration-dir\") pod \"csi-node-driver-w9jzb\" (UID: \"72a87a81-6fc8-4427-8a91-308c02047854\") " pod="calico-system/csi-node-driver-w9jzb" Jan 30 13:43:05.553355 kubelet[2738]: I0130 13:43:05.553215 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/72a87a81-6fc8-4427-8a91-308c02047854-varrun\") pod \"csi-node-driver-w9jzb\" (UID: \"72a87a81-6fc8-4427-8a91-308c02047854\") " pod="calico-system/csi-node-driver-w9jzb" Jan 30 13:43:05.554807 kubelet[2738]: E0130 13:43:05.554783 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.554807 kubelet[2738]: W0130 13:43:05.554802 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.554884 kubelet[2738]: E0130 13:43:05.554827 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.555098 kubelet[2738]: E0130 13:43:05.555065 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.555098 kubelet[2738]: W0130 13:43:05.555086 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.555098 kubelet[2738]: E0130 13:43:05.555096 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.555446 kubelet[2738]: E0130 13:43:05.555346 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.555446 kubelet[2738]: W0130 13:43:05.555358 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.555446 kubelet[2738]: E0130 13:43:05.555367 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.555642 kubelet[2738]: E0130 13:43:05.555594 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.555642 kubelet[2738]: W0130 13:43:05.555602 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.555642 kubelet[2738]: E0130 13:43:05.555610 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.558501 kubelet[2738]: E0130 13:43:05.555765 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.558501 kubelet[2738]: W0130 13:43:05.555774 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.558501 kubelet[2738]: E0130 13:43:05.555782 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.558501 kubelet[2738]: E0130 13:43:05.556006 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.558501 kubelet[2738]: W0130 13:43:05.556014 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.558501 kubelet[2738]: E0130 13:43:05.556023 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.558501 kubelet[2738]: E0130 13:43:05.556195 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.558501 kubelet[2738]: W0130 13:43:05.556202 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.558501 kubelet[2738]: E0130 13:43:05.556210 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.558501 kubelet[2738]: E0130 13:43:05.556975 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.558750 kubelet[2738]: W0130 13:43:05.556990 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.558750 kubelet[2738]: E0130 13:43:05.557006 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.558750 kubelet[2738]: I0130 13:43:05.557025 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqxk\" (UniqueName: \"kubernetes.io/projected/72a87a81-6fc8-4427-8a91-308c02047854-kube-api-access-xkqxk\") pod \"csi-node-driver-w9jzb\" (UID: \"72a87a81-6fc8-4427-8a91-308c02047854\") " pod="calico-system/csi-node-driver-w9jzb" Jan 30 13:43:05.558750 kubelet[2738]: E0130 13:43:05.557227 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.558750 kubelet[2738]: W0130 13:43:05.557235 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.558750 kubelet[2738]: E0130 13:43:05.557243 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.558750 kubelet[2738]: E0130 13:43:05.557411 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.558750 kubelet[2738]: W0130 13:43:05.557418 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.558750 kubelet[2738]: E0130 13:43:05.557427 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.558939 kubelet[2738]: E0130 13:43:05.557605 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.558939 kubelet[2738]: W0130 13:43:05.557615 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.558939 kubelet[2738]: E0130 13:43:05.557623 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.558939 kubelet[2738]: E0130 13:43:05.557775 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.558939 kubelet[2738]: W0130 13:43:05.557781 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.558939 kubelet[2738]: E0130 13:43:05.557790 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.558939 kubelet[2738]: E0130 13:43:05.557932 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.558939 kubelet[2738]: W0130 13:43:05.557939 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.558939 kubelet[2738]: E0130 13:43:05.557946 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.558939 kubelet[2738]: E0130 13:43:05.558138 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.559154 kubelet[2738]: W0130 13:43:05.558146 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.559154 kubelet[2738]: E0130 13:43:05.558154 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.559952 kubelet[2738]: E0130 13:43:05.559211 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.559952 kubelet[2738]: W0130 13:43:05.559223 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.559952 kubelet[2738]: E0130 13:43:05.559233 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.560641 kubelet[2738]: E0130 13:43:05.560099 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.560641 kubelet[2738]: W0130 13:43:05.560111 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.561997 kubelet[2738]: E0130 13:43:05.561792 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.561997 kubelet[2738]: W0130 13:43:05.561803 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.561997 kubelet[2738]: E0130 13:43:05.561814 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.561997 kubelet[2738]: E0130 13:43:05.561832 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.562743 kubelet[2738]: E0130 13:43:05.562726 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.562743 kubelet[2738]: W0130 13:43:05.562741 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.562804 kubelet[2738]: E0130 13:43:05.562756 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.563796 kubelet[2738]: E0130 13:43:05.563783 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.563857 kubelet[2738]: W0130 13:43:05.563846 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.564399 kubelet[2738]: E0130 13:43:05.564387 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.564906 kubelet[2738]: E0130 13:43:05.564885 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.564906 kubelet[2738]: W0130 13:43:05.564904 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.564966 kubelet[2738]: E0130 13:43:05.564917 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.565135 kubelet[2738]: E0130 13:43:05.565112 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.565135 kubelet[2738]: W0130 13:43:05.565127 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.565226 kubelet[2738]: E0130 13:43:05.565168 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.565392 kubelet[2738]: E0130 13:43:05.565344 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.565392 kubelet[2738]: W0130 13:43:05.565354 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.565392 kubelet[2738]: E0130 13:43:05.565361 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.583719 kubelet[2738]: E0130 13:43:05.583693 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:05.584318 containerd[1552]: time="2025-01-30T13:43:05.584185994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6789d6b488-l782j,Uid:5ff6056e-508e-42fc-b6ff-740e335b1cc7,Namespace:calico-system,Attempt:0,}" Jan 30 13:43:05.600478 kubelet[2738]: E0130 13:43:05.600444 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:05.600936 containerd[1552]: time="2025-01-30T13:43:05.600892411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lnp8c,Uid:48012f06-aaf0-470c-bf89-7cc103f69f53,Namespace:calico-system,Attempt:0,}" Jan 30 13:43:05.657734 kubelet[2738]: E0130 13:43:05.657707 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.657734 kubelet[2738]: W0130 13:43:05.657726 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.657734 kubelet[2738]: E0130 13:43:05.657744 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.657995 kubelet[2738]: E0130 13:43:05.657981 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.657995 kubelet[2738]: W0130 13:43:05.657993 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.658051 kubelet[2738]: E0130 13:43:05.658006 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.658262 kubelet[2738]: E0130 13:43:05.658240 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.658262 kubelet[2738]: W0130 13:43:05.658252 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.658337 kubelet[2738]: E0130 13:43:05.658265 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.658512 kubelet[2738]: E0130 13:43:05.658499 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.658512 kubelet[2738]: W0130 13:43:05.658510 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.658593 kubelet[2738]: E0130 13:43:05.658526 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.658768 kubelet[2738]: E0130 13:43:05.658750 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.658768 kubelet[2738]: W0130 13:43:05.658763 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.658853 kubelet[2738]: E0130 13:43:05.658779 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.659025 kubelet[2738]: E0130 13:43:05.659012 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.659025 kubelet[2738]: W0130 13:43:05.659023 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.659117 kubelet[2738]: E0130 13:43:05.659036 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.659265 kubelet[2738]: E0130 13:43:05.659244 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.659265 kubelet[2738]: W0130 13:43:05.659257 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.659345 kubelet[2738]: E0130 13:43:05.659283 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.659465 kubelet[2738]: E0130 13:43:05.659451 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.659465 kubelet[2738]: W0130 13:43:05.659462 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.659540 kubelet[2738]: E0130 13:43:05.659498 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.659698 kubelet[2738]: E0130 13:43:05.659684 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.659698 kubelet[2738]: W0130 13:43:05.659694 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.659766 kubelet[2738]: E0130 13:43:05.659728 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.659910 kubelet[2738]: E0130 13:43:05.659896 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.659950 kubelet[2738]: W0130 13:43:05.659916 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.659950 kubelet[2738]: E0130 13:43:05.659942 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.660125 kubelet[2738]: E0130 13:43:05.660111 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.660125 kubelet[2738]: W0130 13:43:05.660121 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.660269 kubelet[2738]: E0130 13:43:05.660148 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.660336 kubelet[2738]: E0130 13:43:05.660313 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.660336 kubelet[2738]: W0130 13:43:05.660332 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.660401 kubelet[2738]: E0130 13:43:05.660347 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.660571 kubelet[2738]: E0130 13:43:05.660556 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.660571 kubelet[2738]: W0130 13:43:05.660569 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.660647 kubelet[2738]: E0130 13:43:05.660584 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.660782 kubelet[2738]: E0130 13:43:05.660767 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.660782 kubelet[2738]: W0130 13:43:05.660780 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.660876 kubelet[2738]: E0130 13:43:05.660795 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.661092 kubelet[2738]: E0130 13:43:05.661061 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.661092 kubelet[2738]: W0130 13:43:05.661083 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.661164 kubelet[2738]: E0130 13:43:05.661098 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.661300 kubelet[2738]: E0130 13:43:05.661284 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.661300 kubelet[2738]: W0130 13:43:05.661297 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.661376 kubelet[2738]: E0130 13:43:05.661325 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.661515 kubelet[2738]: E0130 13:43:05.661503 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.661515 kubelet[2738]: W0130 13:43:05.661514 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.661593 kubelet[2738]: E0130 13:43:05.661538 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.661710 kubelet[2738]: E0130 13:43:05.661697 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.661710 kubelet[2738]: W0130 13:43:05.661707 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.661776 kubelet[2738]: E0130 13:43:05.661730 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.661918 kubelet[2738]: E0130 13:43:05.661904 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.661918 kubelet[2738]: W0130 13:43:05.661916 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.661986 kubelet[2738]: E0130 13:43:05.661941 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.662143 kubelet[2738]: E0130 13:43:05.662129 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.662143 kubelet[2738]: W0130 13:43:05.662141 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.662218 kubelet[2738]: E0130 13:43:05.662161 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.662384 kubelet[2738]: E0130 13:43:05.662371 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.662384 kubelet[2738]: W0130 13:43:05.662382 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.662457 kubelet[2738]: E0130 13:43:05.662393 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.662659 kubelet[2738]: E0130 13:43:05.662645 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.662659 kubelet[2738]: W0130 13:43:05.662658 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.662752 kubelet[2738]: E0130 13:43:05.662672 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.662878 kubelet[2738]: E0130 13:43:05.662863 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.662878 kubelet[2738]: W0130 13:43:05.662874 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.662954 kubelet[2738]: E0130 13:43:05.662885 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:43:05.663117 kubelet[2738]: E0130 13:43:05.663104 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.663117 kubelet[2738]: W0130 13:43:05.663115 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.663208 kubelet[2738]: E0130 13:43:05.663124 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.669198 kubelet[2738]: E0130 13:43:05.669168 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.669198 kubelet[2738]: W0130 13:43:05.669185 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.669198 kubelet[2738]: E0130 13:43:05.669200 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.760133 kubelet[2738]: E0130 13:43:05.760107 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.760133 kubelet[2738]: W0130 13:43:05.760125 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.760133 kubelet[2738]: E0130 13:43:05.760142 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:05.834726 kubelet[2738]: E0130 13:43:05.834622 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:43:05.834726 kubelet[2738]: W0130 13:43:05.834643 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:43:05.834726 kubelet[2738]: E0130 13:43:05.834660 2738 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:43:06.095558 containerd[1552]: time="2025-01-30T13:43:06.095170462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:06.095558 containerd[1552]: time="2025-01-30T13:43:06.095224094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:06.095558 containerd[1552]: time="2025-01-30T13:43:06.095235836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:06.095558 containerd[1552]: time="2025-01-30T13:43:06.095322088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:06.098005 containerd[1552]: time="2025-01-30T13:43:06.097898378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:06.098005 containerd[1552]: time="2025-01-30T13:43:06.097966436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:06.098005 containerd[1552]: time="2025-01-30T13:43:06.097980843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:06.098189 containerd[1552]: time="2025-01-30T13:43:06.098088015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:06.134021 containerd[1552]: time="2025-01-30T13:43:06.133975863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lnp8c,Uid:48012f06-aaf0-470c-bf89-7cc103f69f53,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c40d4e7bf0b179d68267e412444a42585324eed07410467d3513c43b2597294\"" Jan 30 13:43:06.134721 kubelet[2738]: E0130 13:43:06.134700 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:06.136894 containerd[1552]: time="2025-01-30T13:43:06.136845765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:43:06.149159 containerd[1552]: time="2025-01-30T13:43:06.149111856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6789d6b488-l782j,Uid:5ff6056e-508e-42fc-b6ff-740e335b1cc7,Namespace:calico-system,Attempt:0,} returns sandbox id \"4316e2b4974e386e956f7f7b03f0a9149122653827c2ed8f6e91af90ff69c9a4\"" Jan 30 13:43:06.149728 kubelet[2738]: E0130 13:43:06.149701 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:07.418808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853989073.mount: Deactivated successfully. 
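
The var-lib-containerd-tmpmounts-containerd\x2dmount... units that systemd keeps reporting as deactivated are the short-lived scratch mounts containerd sets up under /var/lib/containerd/tmpmounts while unpacking pulled image layers; their cleanup after each pull is expected. The \x2d in the unit name is systemd's escaping of a literal "-" inside a path component, since plain dashes in a mount unit name stand for "/". A small decoder for readability (plain string munging, not a systemd API):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeMountUnit undoes systemd's \xXX escaping and maps the remaining
// dashes back to path separators, e.g.
// "var-lib-containerd-tmpmounts-containerd\x2dmount1853989073.mount"
// -> "/var/lib/containerd/tmpmounts/containerd-mount1853989073".
func unescapeMountUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); {
		if strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name) {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/')
		} else {
			b.WriteByte(name[i])
		}
		i++
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeMountUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount1853989073.mount`))
}
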
Jan 30 13:43:07.505157 kubelet[2738]: E0130 13:43:07.505108 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9jzb" podUID="72a87a81-6fc8-4427-8a91-308c02047854" Jan 30 13:43:07.593076 containerd[1552]: time="2025-01-30T13:43:07.592996659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:07.605896 containerd[1552]: time="2025-01-30T13:43:07.605844098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:43:07.625287 containerd[1552]: time="2025-01-30T13:43:07.625256412Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:07.648203 containerd[1552]: time="2025-01-30T13:43:07.648152683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:07.648775 containerd[1552]: time="2025-01-30T13:43:07.648749466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.511868946s" Jan 30 13:43:07.648844 containerd[1552]: time="2025-01-30T13:43:07.648779453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:43:07.650048 containerd[1552]: time="2025-01-30T13:43:07.649754939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:43:07.650887 containerd[1552]: time="2025-01-30T13:43:07.650850831Z" level=info msg="CreateContainer within sandbox \"6c40d4e7bf0b179d68267e412444a42585324eed07410467d3513c43b2597294\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:43:07.866259 containerd[1552]: time="2025-01-30T13:43:07.866123910Z" level=info msg="CreateContainer within sandbox \"6c40d4e7bf0b179d68267e412444a42585324eed07410467d3513c43b2597294\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"568b7e0887edb17259dafb3cde2a9f05ccabec252d2e0f76b586b4e1a7873cd3\"" Jan 30 13:43:07.866751 containerd[1552]: time="2025-01-30T13:43:07.866712528Z" level=info msg="StartContainer for \"568b7e0887edb17259dafb3cde2a9f05ccabec252d2e0f76b586b4e1a7873cd3\"" Jan 30 13:43:07.949374 containerd[1552]: time="2025-01-30T13:43:07.949322082Z" level=info msg="StartContainer for \"568b7e0887edb17259dafb3cde2a9f05ccabec252d2e0f76b586b4e1a7873cd3\" returns successfully" Jan 30 13:43:07.970150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-568b7e0887edb17259dafb3cde2a9f05ccabec252d2e0f76b586b4e1a7873cd3-rootfs.mount: Deactivated successfully. 
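
The flexvol-driver container that just started is what populates the path the kubelet was complaining about at 13:43:05: Calico's pod2daemon-flexvol image copies its uds binary into the host's /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ directory (mounted via the flexvol-driver-host volume registered earlier), so subsequent FlexVolume probes can succeed. The probe itself is simple, which explains the error wording: the kubelet execs the driver with an init argument and unmarshals its stdout as JSON, so a missing binary yields empty output and therefore "unexpected end of JSON input". A rough sketch of that call pattern (an approximation, not kubelet's actual driver-call.go):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the shape of a FlexVolume driver reply; the real
// spec carries more fields (capabilities, message, ...).
type driverStatus struct {
	Status string `json:"status"`
}

// probe execs `driver init` and parses its stdout, the way the
// driver-call.go errors earlier in this log imply.
func probe(driver string) (*driverStatus, error) {
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		// fails like this while the uds binary has not been installed yet
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// empty stdout produces exactly "unexpected end of JSON input"
		return nil, err
	}
	return &st, nil
}

func main() {
	_, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
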
Jan 30 13:43:08.116883 containerd[1552]: time="2025-01-30T13:43:08.116745918Z" level=info msg="shim disconnected" id=568b7e0887edb17259dafb3cde2a9f05ccabec252d2e0f76b586b4e1a7873cd3 namespace=k8s.io Jan 30 13:43:08.116883 containerd[1552]: time="2025-01-30T13:43:08.116799388Z" level=warning msg="cleaning up after shim disconnected" id=568b7e0887edb17259dafb3cde2a9f05ccabec252d2e0f76b586b4e1a7873cd3 namespace=k8s.io Jan 30 13:43:08.116883 containerd[1552]: time="2025-01-30T13:43:08.116810880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:43:08.567718 kubelet[2738]: E0130 13:43:08.567676 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:09.505821 kubelet[2738]: E0130 13:43:09.505754 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9jzb" podUID="72a87a81-6fc8-4427-8a91-308c02047854" Jan 30 13:43:09.762824 containerd[1552]: time="2025-01-30T13:43:09.762670373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:09.764198 containerd[1552]: time="2025-01-30T13:43:09.764147581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 13:43:09.765702 containerd[1552]: time="2025-01-30T13:43:09.765655578Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:09.767919 containerd[1552]: time="2025-01-30T13:43:09.767880013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:09.768695 containerd[1552]: time="2025-01-30T13:43:09.768469653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.118674889s" Jan 30 13:43:09.768695 containerd[1552]: time="2025-01-30T13:43:09.768515218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:43:09.770769 containerd[1552]: time="2025-01-30T13:43:09.770052951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:43:09.780089 containerd[1552]: time="2025-01-30T13:43:09.777896406Z" level=info msg="CreateContainer within sandbox \"4316e2b4974e386e956f7f7b03f0a9149122653827c2ed8f6e91af90ff69c9a4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:43:09.795142 containerd[1552]: time="2025-01-30T13:43:09.795076476Z" level=info msg="CreateContainer within sandbox \"4316e2b4974e386e956f7f7b03f0a9149122653827c2ed8f6e91af90ff69c9a4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"405a6583ea9badc7bb4f76fe2277876c39a2acedbbe1ae763ddc5d645ed5fefa\"" 
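
The "shim disconnected" / "cleaning up dead shim" warnings above mark the normal teardown of the short-lived flexvol-driver container (id 568b7e...), not a failure: its task exited, so containerd reaps the per-container shim. Separately, the Pulled lines carry enough data for a back-of-the-envelope throughput estimate; dividing each reported size by its reported wall time gives a rough figure (rough because the logged size need not equal the bytes actually transferred):

package main

import "fmt"

func main() {
	// size (bytes) and duration (seconds) copied from the Pulled lines
	pulls := []struct {
		image string
		bytes float64
		secs  float64
	}{
		{"quay.io/tigera/operator:v1.36.2", 21758492, 5.031452718},
		{"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1", 6855165, 1.511868946},
		{"ghcr.io/flatcar/calico/typha:v3.29.1", 31343217, 2.118674889},
	}
	for _, p := range pulls {
		fmt.Printf("%-50s %.1f MiB/s\n", p.image, p.bytes/p.secs/(1<<20))
	}
	// prints roughly 4.1, 4.3 and 14.1 MiB/s respectively
}
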
Jan 30 13:43:09.795719 containerd[1552]: time="2025-01-30T13:43:09.795687386Z" level=info msg="StartContainer for \"405a6583ea9badc7bb4f76fe2277876c39a2acedbbe1ae763ddc5d645ed5fefa\"" Jan 30 13:43:09.862766 containerd[1552]: time="2025-01-30T13:43:09.862725508Z" level=info msg="StartContainer for \"405a6583ea9badc7bb4f76fe2277876c39a2acedbbe1ae763ddc5d645ed5fefa\" returns successfully" Jan 30 13:43:10.571798 kubelet[2738]: E0130 13:43:10.571765 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:10.704698 kubelet[2738]: I0130 13:43:10.704643 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6789d6b488-l782j" podStartSLOduration=2.084880542 podStartE2EDuration="5.704626778s" podCreationTimestamp="2025-01-30 13:43:05 +0000 UTC" firstStartedPulling="2025-01-30 13:43:06.150182601 +0000 UTC m=+23.715934673" lastFinishedPulling="2025-01-30 13:43:09.769928837 +0000 UTC m=+27.335680909" observedRunningTime="2025-01-30 13:43:10.703832695 +0000 UTC m=+28.269584767" watchObservedRunningTime="2025-01-30 13:43:10.704626778 +0000 UTC m=+28.270378850" Jan 30 13:43:11.505332 kubelet[2738]: E0130 13:43:11.505261 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9jzb" podUID="72a87a81-6fc8-4427-8a91-308c02047854" Jan 30 13:43:11.573286 kubelet[2738]: I0130 13:43:11.573254 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:11.573974 kubelet[2738]: E0130 13:43:11.573941 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:12.611013 kubelet[2738]: I0130 13:43:12.610981 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:12.611895 kubelet[2738]: E0130 13:43:12.611846 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:13.505512 kubelet[2738]: E0130 13:43:13.505430 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9jzb" podUID="72a87a81-6fc8-4427-8a91-308c02047854" Jan 30 13:43:13.577576 kubelet[2738]: E0130 13:43:13.577537 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:14.027426 containerd[1552]: time="2025-01-30T13:43:14.027375060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:14.028689 containerd[1552]: time="2025-01-30T13:43:14.028634207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:43:14.031335 containerd[1552]: time="2025-01-30T13:43:14.030548114Z" level=info msg="ImageCreate event 
name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:14.035211 containerd[1552]: time="2025-01-30T13:43:14.035146998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:14.035939 containerd[1552]: time="2025-01-30T13:43:14.035908419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.265826003s" Jan 30 13:43:14.035987 containerd[1552]: time="2025-01-30T13:43:14.035966258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:43:14.038998 containerd[1552]: time="2025-01-30T13:43:14.038962800Z" level=info msg="CreateContainer within sandbox \"6c40d4e7bf0b179d68267e412444a42585324eed07410467d3513c43b2597294\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:43:14.065376 containerd[1552]: time="2025-01-30T13:43:14.065324700Z" level=info msg="CreateContainer within sandbox \"6c40d4e7bf0b179d68267e412444a42585324eed07410467d3513c43b2597294\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2b76773be52ebbd9db7ad253a65ee2566ce75f8d1c860f0564fc486962a4f9bb\"" Jan 30 13:43:14.065943 containerd[1552]: time="2025-01-30T13:43:14.065847553Z" level=info msg="StartContainer for \"2b76773be52ebbd9db7ad253a65ee2566ce75f8d1c860f0564fc486962a4f9bb\"" Jan 30 13:43:14.265518 containerd[1552]: time="2025-01-30T13:43:14.265444986Z" level=info msg="StartContainer for \"2b76773be52ebbd9db7ad253a65ee2566ce75f8d1c860f0564fc486962a4f9bb\" returns successfully" Jan 30 13:43:14.290279 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:50736.service - OpenSSH per-connection server daemon (10.0.0.1:50736). Jan 30 13:43:14.331684 sshd[3436]: Accepted publickey for core from 10.0.0.1 port 50736 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:14.336278 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:14.344628 systemd-logind[1532]: New session 8 of user core. Jan 30 13:43:14.353017 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:43:14.580941 kubelet[2738]: E0130 13:43:14.580814 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:14.862935 sshd[3436]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:14.868250 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:43:14.868505 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:50736.service: Deactivated successfully. Jan 30 13:43:14.871270 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:43:14.872479 systemd-logind[1532]: Removed session 8. 
Jan 30 13:43:15.505644 kubelet[2738]: E0130 13:43:15.505585 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9jzb" podUID="72a87a81-6fc8-4427-8a91-308c02047854" Jan 30 13:43:15.581994 kubelet[2738]: E0130 13:43:15.581958 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:16.359646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b76773be52ebbd9db7ad253a65ee2566ce75f8d1c860f0564fc486962a4f9bb-rootfs.mount: Deactivated successfully. Jan 30 13:43:16.403784 kubelet[2738]: I0130 13:43:16.390150 2738 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:43:16.771510 containerd[1552]: time="2025-01-30T13:43:16.771419284Z" level=info msg="shim disconnected" id=2b76773be52ebbd9db7ad253a65ee2566ce75f8d1c860f0564fc486962a4f9bb namespace=k8s.io Jan 30 13:43:16.771510 containerd[1552]: time="2025-01-30T13:43:16.771479878Z" level=warning msg="cleaning up after shim disconnected" id=2b76773be52ebbd9db7ad253a65ee2566ce75f8d1c860f0564fc486962a4f9bb namespace=k8s.io Jan 30 13:43:16.771510 containerd[1552]: time="2025-01-30T13:43:16.771510146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:43:16.934595 kubelet[2738]: I0130 13:43:16.934438 2738 topology_manager.go:215] "Topology Admit Handler" podUID="e194480b-6ba7-4dbb-b599-88b607e62d55" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tlsqp" Jan 30 13:43:17.070132 kubelet[2738]: I0130 13:43:17.069668 2738 topology_manager.go:215] "Topology Admit Handler" podUID="176ea8e5-e830-4bc9-bb74-b55fbc8b7c09" podNamespace="calico-system" podName="calico-kube-controllers-55b68d756c-dkqrp" Jan 30 13:43:17.070895 kubelet[2738]: I0130 13:43:17.070800 2738 topology_manager.go:215] "Topology Admit Handler" podUID="1b15d90a-b342-46ab-afb0-2baf9fef6c45" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kbbt2" Jan 30 13:43:17.071132 kubelet[2738]: I0130 13:43:17.071080 2738 topology_manager.go:215] "Topology Admit Handler" podUID="7f4f520e-64e4-4078-a2f8-c4c75525a5da" podNamespace="calico-apiserver" podName="calico-apiserver-7db98ddb54-jfc2r" Jan 30 13:43:17.073043 kubelet[2738]: I0130 13:43:17.072040 2738 topology_manager.go:215] "Topology Admit Handler" podUID="7ed2fe40-9b9d-4c9d-9105-b15821825523" podNamespace="calico-apiserver" podName="calico-apiserver-7db98ddb54-mkc94" Jan 30 13:43:17.135220 kubelet[2738]: I0130 13:43:17.135153 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xthsn\" (UniqueName: \"kubernetes.io/projected/e194480b-6ba7-4dbb-b599-88b607e62d55-kube-api-access-xthsn\") pod \"coredns-7db6d8ff4d-tlsqp\" (UID: \"e194480b-6ba7-4dbb-b599-88b607e62d55\") " pod="kube-system/coredns-7db6d8ff4d-tlsqp" Jan 30 13:43:17.135220 kubelet[2738]: I0130 13:43:17.135209 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e194480b-6ba7-4dbb-b599-88b607e62d55-config-volume\") pod \"coredns-7db6d8ff4d-tlsqp\" (UID: \"e194480b-6ba7-4dbb-b599-88b607e62d55\") " pod="kube-system/coredns-7db6d8ff4d-tlsqp" Jan 30 13:43:17.236288 kubelet[2738]: I0130 13:43:17.236230 2738 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsqtd\" (UniqueName: \"kubernetes.io/projected/7f4f520e-64e4-4078-a2f8-c4c75525a5da-kube-api-access-bsqtd\") pod \"calico-apiserver-7db98ddb54-jfc2r\" (UID: \"7f4f520e-64e4-4078-a2f8-c4c75525a5da\") " pod="calico-apiserver/calico-apiserver-7db98ddb54-jfc2r" Jan 30 13:43:17.236288 kubelet[2738]: I0130 13:43:17.236277 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfvk\" (UniqueName: \"kubernetes.io/projected/176ea8e5-e830-4bc9-bb74-b55fbc8b7c09-kube-api-access-sqfvk\") pod \"calico-kube-controllers-55b68d756c-dkqrp\" (UID: \"176ea8e5-e830-4bc9-bb74-b55fbc8b7c09\") " pod="calico-system/calico-kube-controllers-55b68d756c-dkqrp" Jan 30 13:43:17.236288 kubelet[2738]: I0130 13:43:17.236308 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/176ea8e5-e830-4bc9-bb74-b55fbc8b7c09-tigera-ca-bundle\") pod \"calico-kube-controllers-55b68d756c-dkqrp\" (UID: \"176ea8e5-e830-4bc9-bb74-b55fbc8b7c09\") " pod="calico-system/calico-kube-controllers-55b68d756c-dkqrp" Jan 30 13:43:17.236524 kubelet[2738]: I0130 13:43:17.236331 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b15d90a-b342-46ab-afb0-2baf9fef6c45-config-volume\") pod \"coredns-7db6d8ff4d-kbbt2\" (UID: \"1b15d90a-b342-46ab-afb0-2baf9fef6c45\") " pod="kube-system/coredns-7db6d8ff4d-kbbt2" Jan 30 13:43:17.236524 kubelet[2738]: I0130 13:43:17.236358 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7ss\" (UniqueName: \"kubernetes.io/projected/1b15d90a-b342-46ab-afb0-2baf9fef6c45-kube-api-access-mp7ss\") pod \"coredns-7db6d8ff4d-kbbt2\" (UID: \"1b15d90a-b342-46ab-afb0-2baf9fef6c45\") " pod="kube-system/coredns-7db6d8ff4d-kbbt2" Jan 30 13:43:17.236524 kubelet[2738]: I0130 13:43:17.236398 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ed2fe40-9b9d-4c9d-9105-b15821825523-calico-apiserver-certs\") pod \"calico-apiserver-7db98ddb54-mkc94\" (UID: \"7ed2fe40-9b9d-4c9d-9105-b15821825523\") " pod="calico-apiserver/calico-apiserver-7db98ddb54-mkc94" Jan 30 13:43:17.236524 kubelet[2738]: I0130 13:43:17.236426 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jtpv\" (UniqueName: \"kubernetes.io/projected/7ed2fe40-9b9d-4c9d-9105-b15821825523-kube-api-access-2jtpv\") pod \"calico-apiserver-7db98ddb54-mkc94\" (UID: \"7ed2fe40-9b9d-4c9d-9105-b15821825523\") " pod="calico-apiserver/calico-apiserver-7db98ddb54-mkc94" Jan 30 13:43:17.236524 kubelet[2738]: I0130 13:43:17.236506 2738 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7f4f520e-64e4-4078-a2f8-c4c75525a5da-calico-apiserver-certs\") pod \"calico-apiserver-7db98ddb54-jfc2r\" (UID: \"7f4f520e-64e4-4078-a2f8-c4c75525a5da\") " pod="calico-apiserver/calico-apiserver-7db98ddb54-jfc2r" Jan 30 13:43:17.509110 containerd[1552]: time="2025-01-30T13:43:17.509062370Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-w9jzb,Uid:72a87a81-6fc8-4427-8a91-308c02047854,Namespace:calico-system,Attempt:0,}" Jan 30 13:43:17.586731 kubelet[2738]: E0130 13:43:17.586706 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:17.587379 containerd[1552]: time="2025-01-30T13:43:17.587348449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:43:17.676092 containerd[1552]: time="2025-01-30T13:43:17.676047161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b68d756c-dkqrp,Uid:176ea8e5-e830-4bc9-bb74-b55fbc8b7c09,Namespace:calico-system,Attempt:0,}" Jan 30 13:43:17.678453 containerd[1552]: time="2025-01-30T13:43:17.678427493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db98ddb54-jfc2r,Uid:7f4f520e-64e4-4078-a2f8-c4c75525a5da,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:43:17.678651 kubelet[2738]: E0130 13:43:17.678614 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:17.679099 containerd[1552]: time="2025-01-30T13:43:17.679066584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kbbt2,Uid:1b15d90a-b342-46ab-afb0-2baf9fef6c45,Namespace:kube-system,Attempt:0,}" Jan 30 13:43:17.680392 containerd[1552]: time="2025-01-30T13:43:17.680368250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db98ddb54-mkc94,Uid:7ed2fe40-9b9d-4c9d-9105-b15821825523,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:43:17.838158 kubelet[2738]: E0130 13:43:17.838028 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:17.838846 containerd[1552]: time="2025-01-30T13:43:17.838516227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tlsqp,Uid:e194480b-6ba7-4dbb-b599-88b607e62d55,Namespace:kube-system,Attempt:0,}" Jan 30 13:43:18.116715 containerd[1552]: time="2025-01-30T13:43:18.115801460Z" level=error msg="Failed to destroy network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.116715 containerd[1552]: time="2025-01-30T13:43:18.116193015Z" level=error msg="encountered an error cleaning up failed sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.116715 containerd[1552]: time="2025-01-30T13:43:18.116235124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kbbt2,Uid:1b15d90a-b342-46ab-afb0-2baf9fef6c45,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.121040 containerd[1552]: time="2025-01-30T13:43:18.120898906Z" level=error msg="Failed to destroy network for sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.121416 containerd[1552]: time="2025-01-30T13:43:18.121388265Z" level=error msg="encountered an error cleaning up failed sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.121570 containerd[1552]: time="2025-01-30T13:43:18.121509293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9jzb,Uid:72a87a81-6fc8-4427-8a91-308c02047854,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.128979 containerd[1552]: time="2025-01-30T13:43:18.128808766Z" level=error msg="Failed to destroy network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.129477 containerd[1552]: time="2025-01-30T13:43:18.129448097Z" level=error msg="encountered an error cleaning up failed sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.129621 containerd[1552]: time="2025-01-30T13:43:18.129593139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tlsqp,Uid:e194480b-6ba7-4dbb-b599-88b607e62d55,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.132248 containerd[1552]: time="2025-01-30T13:43:18.132207060Z" level=error msg="Failed to destroy network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.132587 containerd[1552]: time="2025-01-30T13:43:18.132552719Z" level=error msg="Failed to destroy network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 30 13:43:18.132845 containerd[1552]: time="2025-01-30T13:43:18.132816976Z" level=error msg="encountered an error cleaning up failed sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.132959 containerd[1552]: time="2025-01-30T13:43:18.132931370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db98ddb54-jfc2r,Uid:7f4f520e-64e4-4078-a2f8-c4c75525a5da,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.133117 containerd[1552]: time="2025-01-30T13:43:18.132958010Z" level=error msg="encountered an error cleaning up failed sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.133231 containerd[1552]: time="2025-01-30T13:43:18.133189976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b68d756c-dkqrp,Uid:176ea8e5-e830-4bc9-bb74-b55fbc8b7c09,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.135755 kubelet[2738]: E0130 13:43:18.135684 2738 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.136158 kubelet[2738]: E0130 13:43:18.135752 2738 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.136158 kubelet[2738]: E0130 13:43:18.135782 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kbbt2" Jan 30 13:43:18.136158 kubelet[2738]: E0130 13:43:18.135809 2738 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9jzb" Jan 30 13:43:18.136158 kubelet[2738]: E0130 13:43:18.135815 2738 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kbbt2" Jan 30 13:43:18.136268 kubelet[2738]: E0130 13:43:18.135830 2738 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9jzb" Jan 30 13:43:18.136268 kubelet[2738]: E0130 13:43:18.135866 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kbbt2_kube-system(1b15d90a-b342-46ab-afb0-2baf9fef6c45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kbbt2_kube-system(1b15d90a-b342-46ab-afb0-2baf9fef6c45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kbbt2" podUID="1b15d90a-b342-46ab-afb0-2baf9fef6c45" Jan 30 13:43:18.136268 kubelet[2738]: E0130 13:43:18.135866 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9jzb_calico-system(72a87a81-6fc8-4427-8a91-308c02047854)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9jzb_calico-system(72a87a81-6fc8-4427-8a91-308c02047854)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9jzb" podUID="72a87a81-6fc8-4427-8a91-308c02047854" Jan 30 13:43:18.136381 kubelet[2738]: E0130 13:43:18.135684 2738 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.136381 kubelet[2738]: E0130 13:43:18.135910 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b68d756c-dkqrp" Jan 30 13:43:18.136381 kubelet[2738]: E0130 13:43:18.135923 2738 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b68d756c-dkqrp" Jan 30 13:43:18.136458 kubelet[2738]: E0130 13:43:18.135942 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55b68d756c-dkqrp_calico-system(176ea8e5-e830-4bc9-bb74-b55fbc8b7c09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55b68d756c-dkqrp_calico-system(176ea8e5-e830-4bc9-bb74-b55fbc8b7c09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b68d756c-dkqrp" podUID="176ea8e5-e830-4bc9-bb74-b55fbc8b7c09" Jan 30 13:43:18.136458 kubelet[2738]: E0130 13:43:18.135735 2738 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.136458 kubelet[2738]: E0130 13:43:18.135965 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db98ddb54-jfc2r" Jan 30 13:43:18.136566 kubelet[2738]: E0130 13:43:18.135978 2738 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db98ddb54-jfc2r" Jan 30 13:43:18.136566 kubelet[2738]: E0130 13:43:18.135997 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db98ddb54-jfc2r_calico-apiserver(7f4f520e-64e4-4078-a2f8-c4c75525a5da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7db98ddb54-jfc2r_calico-apiserver(7f4f520e-64e4-4078-a2f8-c4c75525a5da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db98ddb54-jfc2r" podUID="7f4f520e-64e4-4078-a2f8-c4c75525a5da" Jan 30 13:43:18.136566 kubelet[2738]: E0130 13:43:18.135690 2738 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.136656 kubelet[2738]: E0130 13:43:18.136104 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tlsqp" Jan 30 13:43:18.136656 kubelet[2738]: E0130 13:43:18.136123 2738 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tlsqp" Jan 30 13:43:18.136656 kubelet[2738]: E0130 13:43:18.136156 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tlsqp_kube-system(e194480b-6ba7-4dbb-b599-88b607e62d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tlsqp_kube-system(e194480b-6ba7-4dbb-b599-88b607e62d55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tlsqp" podUID="e194480b-6ba7-4dbb-b599-88b607e62d55" Jan 30 13:43:18.141668 containerd[1552]: time="2025-01-30T13:43:18.141624322Z" level=error msg="Failed to destroy network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.142090 containerd[1552]: time="2025-01-30T13:43:18.142054309Z" level=error msg="encountered an error cleaning up failed sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 30 13:43:18.142221 containerd[1552]: time="2025-01-30T13:43:18.142112598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db98ddb54-mkc94,Uid:7ed2fe40-9b9d-4c9d-9105-b15821825523,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.142335 kubelet[2738]: E0130 13:43:18.142302 2738 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.142381 kubelet[2738]: E0130 13:43:18.142351 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db98ddb54-mkc94" Jan 30 13:43:18.142381 kubelet[2738]: E0130 13:43:18.142371 2738 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db98ddb54-mkc94" Jan 30 13:43:18.142444 kubelet[2738]: E0130 13:43:18.142416 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db98ddb54-mkc94_calico-apiserver(7ed2fe40-9b9d-4c9d-9105-b15821825523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db98ddb54-mkc94_calico-apiserver(7ed2fe40-9b9d-4c9d-9105-b15821825523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db98ddb54-mkc94" podUID="7ed2fe40-9b9d-4c9d-9105-b15821825523" Jan 30 13:43:18.589065 kubelet[2738]: I0130 13:43:18.589035 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:18.589974 kubelet[2738]: I0130 13:43:18.589942 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:18.593507 kubelet[2738]: I0130 13:43:18.592692 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 
13:43:18.593597 containerd[1552]: time="2025-01-30T13:43:18.592820353Z" level=info msg="StopPodSandbox for \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\"" Jan 30 13:43:18.593851 containerd[1552]: time="2025-01-30T13:43:18.593762204Z" level=info msg="StopPodSandbox for \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\"" Jan 30 13:43:18.594666 kubelet[2738]: I0130 13:43:18.594644 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:18.595522 containerd[1552]: time="2025-01-30T13:43:18.595320631Z" level=info msg="StopPodSandbox for \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\"" Jan 30 13:43:18.595914 containerd[1552]: time="2025-01-30T13:43:18.595880483Z" level=info msg="StopPodSandbox for \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\"" Jan 30 13:43:18.601376 containerd[1552]: time="2025-01-30T13:43:18.601087906Z" level=info msg="Ensure that sandbox 675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f in task-service has been cleanup successfully" Jan 30 13:43:18.601376 containerd[1552]: time="2025-01-30T13:43:18.601086543Z" level=info msg="Ensure that sandbox 39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7 in task-service has been cleanup successfully" Jan 30 13:43:18.601560 containerd[1552]: time="2025-01-30T13:43:18.601086674Z" level=info msg="Ensure that sandbox 81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3 in task-service has been cleanup successfully" Jan 30 13:43:18.610558 containerd[1552]: time="2025-01-30T13:43:18.610481973Z" level=info msg="Ensure that sandbox 4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70 in task-service has been cleanup successfully" Jan 30 13:43:18.618230 kubelet[2738]: I0130 13:43:18.618193 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:18.620421 containerd[1552]: time="2025-01-30T13:43:18.620372354Z" level=info msg="StopPodSandbox for \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\"" Jan 30 13:43:18.620625 containerd[1552]: time="2025-01-30T13:43:18.620604490Z" level=info msg="Ensure that sandbox f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82 in task-service has been cleanup successfully" Jan 30 13:43:18.622630 kubelet[2738]: I0130 13:43:18.622602 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:18.623182 containerd[1552]: time="2025-01-30T13:43:18.623132059Z" level=info msg="StopPodSandbox for \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\"" Jan 30 13:43:18.623350 containerd[1552]: time="2025-01-30T13:43:18.623309252Z" level=info msg="Ensure that sandbox a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2 in task-service has been cleanup successfully" Jan 30 13:43:18.655159 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f-shm.mount: Deactivated successfully. Jan 30 13:43:18.655397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82-shm.mount: Deactivated successfully. 
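Every RunPodSandbox above (and every StopPodSandbox that follows) fails identically: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node agent writes once it is running, and refuses both ADD and DEL operations until it exists; kubelet simply keeps retrying until calico-node comes up at 13:43:23. A sketch of that guard, assuming the plugin does little more than the error text implies:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename mirrors the check behind the "stat /var/lib/calico/nodename"
    // errors in the log: without the file there is no node identity, so both
    // CNI ADD and DEL fail fast with a pointer at the likely cause.
    func readNodename() (string, error) {
    	data, err := os.ReadFile(nodenameFile)
    	if errors.Is(err, os.ErrNotExist) {
    		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
    	}
    	if err != nil {
    		return "", err
    	}
    	return string(data), nil
    }

    func main() {
    	name, err := readNodename()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("node:", name)
    }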
Jan 30 13:43:18.682351 containerd[1552]: time="2025-01-30T13:43:18.682209814Z" level=error msg="StopPodSandbox for \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\" failed" error="failed to destroy network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.682612 kubelet[2738]: E0130 13:43:18.682525 2738 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:18.682714 kubelet[2738]: E0130 13:43:18.682601 2738 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3"} Jan 30 13:43:18.682714 kubelet[2738]: E0130 13:43:18.682683 2738 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"176ea8e5-e830-4bc9-bb74-b55fbc8b7c09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:43:18.682714 kubelet[2738]: E0130 13:43:18.682706 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"176ea8e5-e830-4bc9-bb74-b55fbc8b7c09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b68d756c-dkqrp" podUID="176ea8e5-e830-4bc9-bb74-b55fbc8b7c09" Jan 30 13:43:18.683171 containerd[1552]: time="2025-01-30T13:43:18.683140042Z" level=error msg="StopPodSandbox for \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\" failed" error="failed to destroy network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.683342 containerd[1552]: time="2025-01-30T13:43:18.683285856Z" level=error msg="StopPodSandbox for \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\" failed" error="failed to destroy network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.684233 kubelet[2738]: E0130 13:43:18.684208 2738 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:18.684287 kubelet[2738]: E0130 13:43:18.684247 2738 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f"} Jan 30 13:43:18.684287 kubelet[2738]: E0130 13:43:18.684271 2738 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f4f520e-64e4-4078-a2f8-c4c75525a5da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:43:18.684381 kubelet[2738]: E0130 13:43:18.684289 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f4f520e-64e4-4078-a2f8-c4c75525a5da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db98ddb54-jfc2r" podUID="7f4f520e-64e4-4078-a2f8-c4c75525a5da" Jan 30 13:43:18.684381 kubelet[2738]: E0130 13:43:18.684207 2738 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:18.684381 kubelet[2738]: E0130 13:43:18.684309 2738 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7"} Jan 30 13:43:18.684381 kubelet[2738]: E0130 13:43:18.684331 2738 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e194480b-6ba7-4dbb-b599-88b607e62d55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:43:18.684773 kubelet[2738]: E0130 13:43:18.684346 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e194480b-6ba7-4dbb-b599-88b607e62d55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tlsqp" podUID="e194480b-6ba7-4dbb-b599-88b607e62d55" Jan 30 13:43:18.684773 kubelet[2738]: E0130 13:43:18.684542 2738 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:18.684773 kubelet[2738]: E0130 13:43:18.684571 2738 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70"} Jan 30 13:43:18.684773 kubelet[2738]: E0130 13:43:18.684600 2738 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ed2fe40-9b9d-4c9d-9105-b15821825523\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:43:18.684959 containerd[1552]: time="2025-01-30T13:43:18.684390080Z" level=error msg="StopPodSandbox for \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\" failed" error="failed to destroy network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.684994 kubelet[2738]: E0130 13:43:18.684622 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ed2fe40-9b9d-4c9d-9105-b15821825523\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db98ddb54-mkc94" podUID="7ed2fe40-9b9d-4c9d-9105-b15821825523" Jan 30 13:43:18.685857 containerd[1552]: time="2025-01-30T13:43:18.685808475Z" level=error msg="StopPodSandbox for \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\" failed" error="failed to destroy network for sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.685955 kubelet[2738]: E0130 13:43:18.685929 2738 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:18.686001 kubelet[2738]: E0130 13:43:18.685962 2738 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82"} Jan 30 13:43:18.686001 kubelet[2738]: E0130 13:43:18.685989 2738 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72a87a81-6fc8-4427-8a91-308c02047854\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:43:18.686095 kubelet[2738]: E0130 13:43:18.686014 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72a87a81-6fc8-4427-8a91-308c02047854\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9jzb" podUID="72a87a81-6fc8-4427-8a91-308c02047854" Jan 30 13:43:18.688172 containerd[1552]: time="2025-01-30T13:43:18.688034366Z" level=error msg="StopPodSandbox for \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\" failed" error="failed to destroy network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:43:18.688223 kubelet[2738]: E0130 13:43:18.688156 2738 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:18.688223 kubelet[2738]: E0130 13:43:18.688184 2738 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2"} Jan 30 13:43:18.688223 kubelet[2738]: E0130 13:43:18.688208 2738 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b15d90a-b342-46ab-afb0-2baf9fef6c45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jan 30 13:43:18.688316 kubelet[2738]: E0130 13:43:18.688228 2738 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b15d90a-b342-46ab-afb0-2baf9fef6c45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kbbt2" podUID="1b15d90a-b342-46ab-afb0-2baf9fef6c45" Jan 30 13:43:19.885776 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:48516.service - OpenSSH per-connection server daemon (10.0.0.1:48516). Jan 30 13:43:20.468296 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 48516 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:20.470619 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:20.475574 systemd-logind[1532]: New session 9 of user core. Jan 30 13:43:20.482005 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:43:20.615019 sshd[3856]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:20.618277 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:48516.service: Deactivated successfully. Jan 30 13:43:20.621525 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:43:20.621939 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:43:20.623201 systemd-logind[1532]: Removed session 9. Jan 30 13:43:22.814998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917321948.mount: Deactivated successfully. 
Jan 30 13:43:23.028128 containerd[1552]: time="2025-01-30T13:43:23.028074485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:23.028816 containerd[1552]: time="2025-01-30T13:43:23.028778678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:43:23.029952 containerd[1552]: time="2025-01-30T13:43:23.029920642Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:23.031883 containerd[1552]: time="2025-01-30T13:43:23.031840968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:23.032416 containerd[1552]: time="2025-01-30T13:43:23.032378317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.444991827s" Jan 30 13:43:23.032416 containerd[1552]: time="2025-01-30T13:43:23.032408353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:43:23.041055 containerd[1552]: time="2025-01-30T13:43:23.041010115Z" level=info msg="CreateContainer within sandbox \"6c40d4e7bf0b179d68267e412444a42585324eed07410467d3513c43b2597294\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:43:23.080656 containerd[1552]: time="2025-01-30T13:43:23.080512360Z" level=info msg="CreateContainer within sandbox \"6c40d4e7bf0b179d68267e412444a42585324eed07410467d3513c43b2597294\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"25e02746c8c9dd41b6e32bb2f339232d5603f96aea60cd8e840b7b73f1b17870\"" Jan 30 13:43:23.081220 containerd[1552]: time="2025-01-30T13:43:23.081186135Z" level=info msg="StartContainer for \"25e02746c8c9dd41b6e32bb2f339232d5603f96aea60cd8e840b7b73f1b17870\"" Jan 30 13:43:23.472869 containerd[1552]: time="2025-01-30T13:43:23.472793628Z" level=info msg="StartContainer for \"25e02746c8c9dd41b6e32bb2f339232d5603f96aea60cd8e840b7b73f1b17870\" returns successfully" Jan 30 13:43:23.503819 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:43:23.503946 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
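The calico/node pull requested at 13:43:17.587 completes here, and the reported 5.444991827s agrees with the wall-clock gap between the request and result lines to within the logging delay. The wireguard module load right after StartContainer is plausibly calico-node probing the kernel for WireGuard support at startup, since Calico can use WireGuard to encrypt pod traffic. A quick Go cross-check of the timing, with the two timestamps copied from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// PullImage request vs. "Pulled image" result, copied from the containerd log
    	start, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:43:17.587348449Z")
    	done, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:43:23.032378317Z")
    	reported, _ := time.ParseDuration("5.444991827s")

    	fmt.Printf("wall clock between log lines: %v (reported pull time %v)\n",
    		done.Sub(start), reported)
    	// the reported duration is ~38µs shorter, since the result line is
    	// written just after the pull itself completes
    }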
Jan 30 13:43:23.633132 kubelet[2738]: E0130 13:43:23.633098 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:24.635338 kubelet[2738]: E0130 13:43:24.635294 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:24.877521 kernel: bpftool[4092]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:43:25.124317 systemd-networkd[1240]: vxlan.calico: Link UP Jan 30 13:43:25.124327 systemd-networkd[1240]: vxlan.calico: Gained carrier Jan 30 13:43:25.625814 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:48524.service - OpenSSH per-connection server daemon (10.0.0.1:48524). Jan 30 13:43:25.658628 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 48524 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:25.660188 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:25.663953 systemd-logind[1532]: New session 10 of user core. Jan 30 13:43:25.670743 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:43:25.833860 sshd[4187]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:25.838184 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:48524.service: Deactivated successfully. Jan 30 13:43:25.841296 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:43:25.841407 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:43:25.842655 systemd-logind[1532]: Removed session 10. Jan 30 13:43:26.895720 systemd-networkd[1240]: vxlan.calico: Gained IPv6LL Jan 30 13:43:29.505545 containerd[1552]: time="2025-01-30T13:43:29.505476568Z" level=info msg="StopPodSandbox for \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\"" Jan 30 13:43:29.763030 kubelet[2738]: I0130 13:43:29.762884 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lnp8c" podStartSLOduration=7.866469644 podStartE2EDuration="24.762864097s" podCreationTimestamp="2025-01-30 13:43:05 +0000 UTC" firstStartedPulling="2025-01-30 13:43:06.136590043 +0000 UTC m=+23.702342115" lastFinishedPulling="2025-01-30 13:43:23.032984495 +0000 UTC m=+40.598736568" observedRunningTime="2025-01-30 13:43:23.646776771 +0000 UTC m=+41.212528863" watchObservedRunningTime="2025-01-30 13:43:29.762864097 +0000 UTC m=+47.328616169" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.762 [INFO][4225] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.762 [INFO][4225] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" iface="eth0" netns="/var/run/netns/cni-632d8262-c095-94ea-bd82-cd451a59e6fa" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.763 [INFO][4225] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" iface="eth0" netns="/var/run/netns/cni-632d8262-c095-94ea-bd82-cd451a59e6fa" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.764 [INFO][4225] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" iface="eth0" netns="/var/run/netns/cni-632d8262-c095-94ea-bd82-cd451a59e6fa" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.764 [INFO][4225] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.764 [INFO][4225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.814 [INFO][4232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" HandleID="k8s-pod-network.675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.815 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.815 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.821 [WARNING][4232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" HandleID="k8s-pod-network.675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.821 [INFO][4232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" HandleID="k8s-pod-network.675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.822 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:29.829562 containerd[1552]: 2025-01-30 13:43:29.825 [INFO][4225] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:29.830037 containerd[1552]: time="2025-01-30T13:43:29.829695748Z" level=info msg="TearDown network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\" successfully" Jan 30 13:43:29.830037 containerd[1552]: time="2025-01-30T13:43:29.829722318Z" level=info msg="StopPodSandbox for \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\" returns successfully" Jan 30 13:43:29.831659 containerd[1552]: time="2025-01-30T13:43:29.831582880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db98ddb54-jfc2r,Uid:7f4f520e-64e4-4078-a2f8-c4c75525a5da,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:43:29.833345 systemd[1]: run-netns-cni\x2d632d8262\x2dc095\x2d94ea\x2dbd82\x2dcd451a59e6fa.mount: Deactivated successfully. 
Jan 30 13:43:29.936977 systemd-networkd[1240]: cali25a42e66813: Link UP Jan 30 13:43:29.937582 systemd-networkd[1240]: cali25a42e66813: Gained carrier Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.878 [INFO][4240] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0 calico-apiserver-7db98ddb54- calico-apiserver 7f4f520e-64e4-4078-a2f8-c4c75525a5da 845 0 2025-01-30 13:43:05 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7db98ddb54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7db98ddb54-jfc2r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali25a42e66813 [] []}} ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-jfc2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.878 [INFO][4240] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-jfc2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.907 [INFO][4253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" HandleID="k8s-pod-network.d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.913 [INFO][4253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" HandleID="k8s-pod-network.d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002deeb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7db98ddb54-jfc2r", "timestamp":"2025-01-30 13:43:29.907086838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.913 [INFO][4253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.913 [INFO][4253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.913 [INFO][4253] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.914 [INFO][4253] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" host="localhost" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.918 [INFO][4253] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.921 [INFO][4253] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.922 [INFO][4253] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.923 [INFO][4253] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.923 [INFO][4253] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" host="localhost" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.924 [INFO][4253] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.927 [INFO][4253] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" host="localhost" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.931 [INFO][4253] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" host="localhost" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.931 [INFO][4253] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" host="localhost" Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.931 [INFO][4253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:43:29.949783 containerd[1552]: 2025-01-30 13:43:29.931 [INFO][4253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" HandleID="k8s-pod-network.d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.950292 containerd[1552]: 2025-01-30 13:43:29.935 [INFO][4240] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-jfc2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0", GenerateName:"calico-apiserver-7db98ddb54-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f4f520e-64e4-4078-a2f8-c4c75525a5da", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db98ddb54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7db98ddb54-jfc2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a42e66813", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:29.950292 containerd[1552]: 2025-01-30 13:43:29.935 [INFO][4240] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-jfc2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.950292 containerd[1552]: 2025-01-30 13:43:29.935 [INFO][4240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25a42e66813 ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-jfc2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.950292 containerd[1552]: 2025-01-30 13:43:29.938 [INFO][4240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-jfc2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.950292 containerd[1552]: 2025-01-30 13:43:29.938 [INFO][4240] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-jfc2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0", GenerateName:"calico-apiserver-7db98ddb54-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f4f520e-64e4-4078-a2f8-c4c75525a5da", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db98ddb54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f", Pod:"calico-apiserver-7db98ddb54-jfc2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a42e66813", MAC:"4a:01:cb:c7:f0:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:29.950292 containerd[1552]: 2025-01-30 13:43:29.945 [INFO][4240] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-jfc2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:29.980047 containerd[1552]: time="2025-01-30T13:43:29.979890268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:29.980047 containerd[1552]: time="2025-01-30T13:43:29.980025511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:29.980047 containerd[1552]: time="2025-01-30T13:43:29.980043976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:29.980240 containerd[1552]: time="2025-01-30T13:43:29.980148001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:30.007737 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:43:30.034551 containerd[1552]: time="2025-01-30T13:43:30.034414335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db98ddb54-jfc2r,Uid:7f4f520e-64e4-4078-a2f8-c4c75525a5da,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f\"" Jan 30 13:43:30.036344 containerd[1552]: time="2025-01-30T13:43:30.036309371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:43:30.506124 containerd[1552]: time="2025-01-30T13:43:30.506077764Z" level=info msg="StopPodSandbox for \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\"" Jan 30 13:43:30.507125 containerd[1552]: time="2025-01-30T13:43:30.506811781Z" level=info msg="StopPodSandbox for \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\"" Jan 30 13:43:30.843821 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:52988.service - OpenSSH per-connection server daemon (10.0.0.1:52988). Jan 30 13:43:30.876520 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 52988 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:30.878749 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:30.883594 systemd-logind[1532]: New session 11 of user core. Jan 30 13:43:30.889846 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.720 [INFO][4355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.720 [INFO][4355] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" iface="eth0" netns="/var/run/netns/cni-4697cac5-b5d1-6254-a66f-75bd1181cfce" Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.721 [INFO][4355] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" iface="eth0" netns="/var/run/netns/cni-4697cac5-b5d1-6254-a66f-75bd1181cfce" Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.721 [INFO][4355] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" iface="eth0" netns="/var/run/netns/cni-4697cac5-b5d1-6254-a66f-75bd1181cfce" Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.721 [INFO][4355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.721 [INFO][4355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.741 [INFO][4370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" HandleID="k8s-pod-network.39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.741 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.741 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.878 [WARNING][4370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" HandleID="k8s-pod-network.39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.878 [INFO][4370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" HandleID="k8s-pod-network.39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.885 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:30.895781 containerd[1552]: 2025-01-30 13:43:30.888 [INFO][4355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:30.898859 containerd[1552]: time="2025-01-30T13:43:30.898793188Z" level=info msg="TearDown network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\" successfully" Jan 30 13:43:30.898930 containerd[1552]: time="2025-01-30T13:43:30.898918514Z" level=info msg="StopPodSandbox for \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\" returns successfully" Jan 30 13:43:30.899295 kubelet[2738]: E0130 13:43:30.899268 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:30.899939 systemd[1]: run-netns-cni\x2d4697cac5\x2db5d1\x2d6254\x2da66f\x2d75bd1181cfce.mount: Deactivated successfully. 
Jan 30 13:43:30.900895 containerd[1552]: time="2025-01-30T13:43:30.900423438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tlsqp,Uid:e194480b-6ba7-4dbb-b599-88b607e62d55,Namespace:kube-system,Attempt:1,}" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.879 [INFO][4354] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.879 [INFO][4354] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" iface="eth0" netns="/var/run/netns/cni-76936847-ba56-1e06-113b-512d4d294a1a" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.879 [INFO][4354] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" iface="eth0" netns="/var/run/netns/cni-76936847-ba56-1e06-113b-512d4d294a1a" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.880 [INFO][4354] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" iface="eth0" netns="/var/run/netns/cni-76936847-ba56-1e06-113b-512d4d294a1a" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.880 [INFO][4354] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.880 [INFO][4354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.907 [INFO][4379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" HandleID="k8s-pod-network.f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.907 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.907 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.913 [WARNING][4379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" HandleID="k8s-pod-network.f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.913 [INFO][4379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" HandleID="k8s-pod-network.f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.914 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:30.924521 containerd[1552]: 2025-01-30 13:43:30.917 [INFO][4354] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:30.925333 containerd[1552]: time="2025-01-30T13:43:30.925191434Z" level=info msg="TearDown network for sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\" successfully" Jan 30 13:43:30.925333 containerd[1552]: time="2025-01-30T13:43:30.925228935Z" level=info msg="StopPodSandbox for \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\" returns successfully" Jan 30 13:43:30.927515 containerd[1552]: time="2025-01-30T13:43:30.927300544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9jzb,Uid:72a87a81-6fc8-4427-8a91-308c02047854,Namespace:calico-system,Attempt:1,}" Jan 30 13:43:30.928717 systemd[1]: run-netns-cni\x2d76936847\x2dba56\x2d1e06\x2d113b\x2d512d4d294a1a.mount: Deactivated successfully. Jan 30 13:43:31.038618 systemd-networkd[1240]: cali302955e2184: Link UP Jan 30 13:43:31.038908 systemd-networkd[1240]: cali302955e2184: Gained carrier Jan 30 13:43:31.039744 sshd[4377]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:31.048789 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:53002.service - OpenSSH per-connection server daemon (10.0.0.1:53002). Jan 30 13:43:31.049310 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:52988.service: Deactivated successfully. Jan 30 13:43:31.051283 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:43:31.052057 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:43:31.054362 systemd-logind[1532]: Removed session 11. Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:30.953 [INFO][4389] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0 coredns-7db6d8ff4d- kube-system e194480b-6ba7-4dbb-b599-88b607e62d55 859 0 2025-01-30 13:42:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-tlsqp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali302955e2184 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlsqp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlsqp-" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:30.953 [INFO][4389] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlsqp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:30.988 [INFO][4425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" HandleID="k8s-pod-network.6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:30.996 [INFO][4425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" HandleID="k8s-pod-network.6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d090), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-tlsqp", "timestamp":"2025-01-30 13:43:30.988154719 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:30.996 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:30.997 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:30.997 [INFO][4425] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:30.998 [INFO][4425] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" host="localhost" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.003 [INFO][4425] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.007 [INFO][4425] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.009 [INFO][4425] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.014 [INFO][4425] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.014 [INFO][4425] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" host="localhost" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.017 [INFO][4425] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954 Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.023 [INFO][4425] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" host="localhost" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.032 [INFO][4425] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" host="localhost" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.032 [INFO][4425] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" host="localhost" Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.032 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:43:31.077097 containerd[1552]: 2025-01-30 13:43:31.032 [INFO][4425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" HandleID="k8s-pod-network.6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:31.078274 containerd[1552]: 2025-01-30 13:43:31.035 [INFO][4389] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlsqp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e194480b-6ba7-4dbb-b599-88b607e62d55", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-tlsqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali302955e2184", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:31.078274 containerd[1552]: 2025-01-30 13:43:31.036 [INFO][4389] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlsqp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:31.078274 containerd[1552]: 2025-01-30 13:43:31.036 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali302955e2184 ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlsqp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:31.078274 containerd[1552]: 2025-01-30 13:43:31.038 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlsqp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:31.078274 containerd[1552]: 2025-01-30 13:43:31.056
[INFO][4389] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlsqp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e194480b-6ba7-4dbb-b599-88b607e62d55", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954", Pod:"coredns-7db6d8ff4d-tlsqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali302955e2184", MAC:"a6:0a:5d:94:d9:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:31.078274 containerd[1552]: 2025-01-30 13:43:31.073 [INFO][4389] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tlsqp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:31.080785 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 53002 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:31.082791 sshd[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:31.088940 systemd-logind[1532]: New session 12 of user core. Jan 30 13:43:31.094963 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:43:31.102719 systemd-networkd[1240]: cali318c7718c17: Link UP Jan 30 13:43:31.103322 systemd-networkd[1240]: cali318c7718c17: Gained carrier Jan 30 13:43:31.114332 containerd[1552]: time="2025-01-30T13:43:31.113861573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:31.114332 containerd[1552]: time="2025-01-30T13:43:31.113932856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:31.114332 containerd[1552]: time="2025-01-30T13:43:31.113950950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:31.114464 containerd[1552]: time="2025-01-30T13:43:31.114334300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:30.973 [INFO][4404] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w9jzb-eth0 csi-node-driver- calico-system 72a87a81-6fc8-4427-8a91-308c02047854 860 0 2025-01-30 13:43:05 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w9jzb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali318c7718c17 [] []}} ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Namespace="calico-system" Pod="csi-node-driver-w9jzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--w9jzb-" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:30.973 [INFO][4404] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Namespace="calico-system" Pod="csi-node-driver-w9jzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.010 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" HandleID="k8s-pod-network.93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.023 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" HandleID="k8s-pod-network.93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w9jzb", "timestamp":"2025-01-30 13:43:31.010120935 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.023 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.032 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.032 [INFO][4434] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.035 [INFO][4434] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" host="localhost" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.043 [INFO][4434] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.073 [INFO][4434] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.075 [INFO][4434] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.078 [INFO][4434] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.078 [INFO][4434] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" host="localhost" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.079 [INFO][4434] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5 Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.088 [INFO][4434] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" host="localhost" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.093 [INFO][4434] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" host="localhost" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.093 [INFO][4434] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" host="localhost" Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.093 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:43:31.118549 containerd[1552]: 2025-01-30 13:43:31.093 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" HandleID="k8s-pod-network.93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:31.119044 containerd[1552]: 2025-01-30 13:43:31.097 [INFO][4404] cni-plugin/k8s.go 386: Populated endpoint ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Namespace="calico-system" Pod="csi-node-driver-w9jzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--w9jzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w9jzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72a87a81-6fc8-4427-8a91-308c02047854", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w9jzb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali318c7718c17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:31.119044 containerd[1552]: 2025-01-30 13:43:31.097 [INFO][4404] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Namespace="calico-system" Pod="csi-node-driver-w9jzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:31.119044 containerd[1552]: 2025-01-30 13:43:31.097 [INFO][4404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali318c7718c17 ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Namespace="calico-system" Pod="csi-node-driver-w9jzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:31.119044 containerd[1552]: 2025-01-30 13:43:31.102 [INFO][4404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Namespace="calico-system" Pod="csi-node-driver-w9jzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:31.119044 containerd[1552]: 2025-01-30 13:43:31.102 [INFO][4404] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Namespace="calico-system" Pod="csi-node-driver-w9jzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--w9jzb-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w9jzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72a87a81-6fc8-4427-8a91-308c02047854", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5", Pod:"csi-node-driver-w9jzb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali318c7718c17", MAC:"4a:66:01:17:7b:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:31.119044 containerd[1552]: 2025-01-30 13:43:31.114 [INFO][4404] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5" Namespace="calico-system" Pod="csi-node-driver-w9jzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:31.119655 systemd-networkd[1240]: cali25a42e66813: Gained IPv6LL Jan 30 13:43:31.143249 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:43:31.169925 containerd[1552]: time="2025-01-30T13:43:31.169883914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tlsqp,Uid:e194480b-6ba7-4dbb-b599-88b607e62d55,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954\"" Jan 30 13:43:31.170873 kubelet[2738]: E0130 13:43:31.170802 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:31.174461 containerd[1552]: time="2025-01-30T13:43:31.174413164Z" level=info msg="CreateContainer within sandbox \"6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:43:31.207582 containerd[1552]: time="2025-01-30T13:43:31.207479953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:31.208479 containerd[1552]: time="2025-01-30T13:43:31.208403626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:31.208479 containerd[1552]: time="2025-01-30T13:43:31.208448911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:31.208769 containerd[1552]: time="2025-01-30T13:43:31.208704672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:31.236378 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:43:31.249703 containerd[1552]: time="2025-01-30T13:43:31.249641491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9jzb,Uid:72a87a81-6fc8-4427-8a91-308c02047854,Namespace:calico-system,Attempt:1,} returns sandbox id \"93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5\"" Jan 30 13:43:31.289150 sshd[4443]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:31.292384 containerd[1552]: time="2025-01-30T13:43:31.292348984Z" level=info msg="CreateContainer within sandbox \"6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82b2974f6354fdc5346b2a9ce273979d5623945131cceec7f5d1e640e30af94e\"" Jan 30 13:43:31.292898 containerd[1552]: time="2025-01-30T13:43:31.292873277Z" level=info msg="StartContainer for \"82b2974f6354fdc5346b2a9ce273979d5623945131cceec7f5d1e640e30af94e\"" Jan 30 13:43:31.298801 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:53006.service - OpenSSH per-connection server daemon (10.0.0.1:53006). Jan 30 13:43:31.299344 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:53002.service: Deactivated successfully. Jan 30 13:43:31.310125 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:43:31.314835 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:43:31.316822 systemd-logind[1532]: Removed session 12. Jan 30 13:43:31.333340 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 53006 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:31.335240 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:31.339955 systemd-logind[1532]: New session 13 of user core. Jan 30 13:43:31.346878 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:43:31.360958 containerd[1552]: time="2025-01-30T13:43:31.360925386Z" level=info msg="StartContainer for \"82b2974f6354fdc5346b2a9ce273979d5623945131cceec7f5d1e640e30af94e\" returns successfully" Jan 30 13:43:31.467011 sshd[4566]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:31.470978 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:53006.service: Deactivated successfully. Jan 30 13:43:31.473617 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:43:31.473757 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:43:31.475075 systemd-logind[1532]: Removed session 13. 
Jan 30 13:43:31.669087 kubelet[2738]: E0130 13:43:31.668821 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:32.507299 containerd[1552]: time="2025-01-30T13:43:32.506765824Z" level=info msg="StopPodSandbox for \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\"" Jan 30 13:43:32.517216 containerd[1552]: time="2025-01-30T13:43:32.517172774Z" level=info msg="StopPodSandbox for \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\"" Jan 30 13:43:32.591657 systemd-networkd[1240]: cali302955e2184: Gained IPv6LL Jan 30 13:43:32.678611 kubelet[2738]: E0130 13:43:32.678242 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:32.710102 kubelet[2738]: I0130 13:43:32.710020 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tlsqp" podStartSLOduration=36.709997826 podStartE2EDuration="36.709997826s" podCreationTimestamp="2025-01-30 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:43:31.686040282 +0000 UTC m=+49.251792354" watchObservedRunningTime="2025-01-30 13:43:32.709997826 +0000 UTC m=+50.275749898" Jan 30 13:43:32.785642 systemd-networkd[1240]: cali318c7718c17: Gained IPv6LL Jan 30 13:43:32.796252 containerd[1552]: time="2025-01-30T13:43:32.796200421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:32.797167 containerd[1552]: time="2025-01-30T13:43:32.797068670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:43:32.798297 containerd[1552]: time="2025-01-30T13:43:32.798244436Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:32.803412 containerd[1552]: time="2025-01-30T13:43:32.802422197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:32.803412 containerd[1552]: time="2025-01-30T13:43:32.802978912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.766631779s" Jan 30 13:43:32.803412 containerd[1552]: time="2025-01-30T13:43:32.803022484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:43:32.807022 containerd[1552]: time="2025-01-30T13:43:32.806827465Z" level=info msg="CreateContainer within sandbox \"d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:43:32.807762 containerd[1552]: 
time="2025-01-30T13:43:32.807373358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.709 [INFO][4645] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.758 [INFO][4645] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" iface="eth0" netns="/var/run/netns/cni-b730d6cb-adbb-7cba-fe39-f182c5ae9023" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.759 [INFO][4645] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" iface="eth0" netns="/var/run/netns/cni-b730d6cb-adbb-7cba-fe39-f182c5ae9023" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.759 [INFO][4645] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" iface="eth0" netns="/var/run/netns/cni-b730d6cb-adbb-7cba-fe39-f182c5ae9023" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.759 [INFO][4645] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.759 [INFO][4645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.793 [INFO][4676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" HandleID="k8s-pod-network.4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.794 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.794 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.799 [WARNING][4676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" HandleID="k8s-pod-network.4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.799 [INFO][4676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" HandleID="k8s-pod-network.4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.801 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:32.807762 containerd[1552]: 2025-01-30 13:43:32.804 [INFO][4645] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:32.809202 containerd[1552]: time="2025-01-30T13:43:32.808885636Z" level=info msg="TearDown network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\" successfully" Jan 30 13:43:32.809202 containerd[1552]: time="2025-01-30T13:43:32.808911035Z" level=info msg="StopPodSandbox for \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\" returns successfully" Jan 30 13:43:32.809997 containerd[1552]: time="2025-01-30T13:43:32.809542339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db98ddb54-mkc94,Uid:7ed2fe40-9b9d-4c9d-9105-b15821825523,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:43:32.812808 systemd[1]: run-netns-cni\x2db730d6cb\x2dadbb\x2d7cba\x2dfe39\x2df182c5ae9023.mount: Deactivated successfully. Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.790 [INFO][4666] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.791 [INFO][4666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" iface="eth0" netns="/var/run/netns/cni-d27522a2-f52c-702e-47fe-d4af8a737afa" Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.791 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" iface="eth0" netns="/var/run/netns/cni-d27522a2-f52c-702e-47fe-d4af8a737afa" Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.791 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" iface="eth0" netns="/var/run/netns/cni-d27522a2-f52c-702e-47fe-d4af8a737afa" Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.791 [INFO][4666] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.791 [INFO][4666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.823 [INFO][4692] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" HandleID="k8s-pod-network.81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.823 [INFO][4692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.823 [INFO][4692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.868 [WARNING][4692] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" HandleID="k8s-pod-network.81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.868 [INFO][4692] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" HandleID="k8s-pod-network.81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.869 [INFO][4692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:32.875395 containerd[1552]: 2025-01-30 13:43:32.872 [INFO][4666] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:32.876149 containerd[1552]: time="2025-01-30T13:43:32.876092703Z" level=info msg="TearDown network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\" successfully" Jan 30 13:43:32.876149 containerd[1552]: time="2025-01-30T13:43:32.876142587Z" level=info msg="StopPodSandbox for \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\" returns successfully" Jan 30 13:43:32.877154 containerd[1552]: time="2025-01-30T13:43:32.877107728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b68d756c-dkqrp,Uid:176ea8e5-e830-4bc9-bb74-b55fbc8b7c09,Namespace:calico-system,Attempt:1,}" Jan 30 13:43:32.878459 systemd[1]: run-netns-cni\x2dd27522a2\x2df52c\x2d702e\x2d47fe\x2dd4af8a737afa.mount: Deactivated successfully. 
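The teardown trace above shows the release pattern Calico uses when a sandbox is stopped: the IPAM plugin first releases by the CNI handle ID, then by the workload ID, and treats "Asked to release address but it doesn't exist. Ignoring" as success so that repeated StopPodSandbox calls stay safe. The sketch below illustrates that idempotent fallback under stated assumptions; the type and function names are invented for illustration and the datastore call is simulated.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// releaseByID stands in for the datastore release performed under the
// host-wide IPAM lock; here it simulates an allocation that is already gone.
func releaseByID(id string) error {
	return errNotFound
}

// releaseWorkloadIPs tries the handle ID first, then the workload ID,
// ignoring missing allocations so teardown is idempotent.
func releaseWorkloadIPs(handleID, workloadID string) error {
	for _, id := range []string{handleID, workloadID} {
		switch err := releaseByID(id); {
		case errors.Is(err, errNotFound):
			// Matches the WARNING above: nothing to release, keep going.
			fmt.Printf("no allocation for %q, ignoring\n", id)
		case err != nil:
			return err
		}
	}
	return nil
}

func main() {
	_ = releaseWorkloadIPs("k8s-pod-network.81e4ab3c0d90",
		"localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0")
}
```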
Jan 30 13:43:32.973159 containerd[1552]: time="2025-01-30T13:43:32.973119342Z" level=info msg="CreateContainer within sandbox \"d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2a05421e2fca45c0948ef239584d98755241aa19383c307118a5926dd5e2584d\"" Jan 30 13:43:32.974294 containerd[1552]: time="2025-01-30T13:43:32.973767288Z" level=info msg="StartContainer for \"2a05421e2fca45c0948ef239584d98755241aa19383c307118a5926dd5e2584d\"" Jan 30 13:43:33.062471 containerd[1552]: time="2025-01-30T13:43:33.061544883Z" level=info msg="StartContainer for \"2a05421e2fca45c0948ef239584d98755241aa19383c307118a5926dd5e2584d\" returns successfully" Jan 30 13:43:33.084715 systemd-networkd[1240]: cali26c9ae9b734: Link UP Jan 30 13:43:33.085623 systemd-networkd[1240]: cali26c9ae9b734: Gained carrier Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.013 [INFO][4705] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0 calico-apiserver-7db98ddb54- calico-apiserver 7ed2fe40-9b9d-4c9d-9105-b15821825523 899 0 2025-01-30 13:43:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7db98ddb54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7db98ddb54-mkc94 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali26c9ae9b734 [] []}} ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-mkc94" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.013 [INFO][4705] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-mkc94" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.043 [INFO][4757] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" HandleID="k8s-pod-network.b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.052 [INFO][4757] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" HandleID="k8s-pod-network.b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003091d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7db98ddb54-mkc94", "timestamp":"2025-01-30 13:43:33.043836182 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.052 [INFO][4757] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.052 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.052 [INFO][4757] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.054 [INFO][4757] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" host="localhost" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.058 [INFO][4757] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.064 [INFO][4757] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.066 [INFO][4757] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.068 [INFO][4757] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.068 [INFO][4757] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" host="localhost" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.069 [INFO][4757] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.073 [INFO][4757] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" host="localhost" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.078 [INFO][4757] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" host="localhost" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.078 [INFO][4757] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" host="localhost" Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.078 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
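The assignment trace above walks Calico's block-affinity path: confirm this host's affinity for the 192.168.88.128/26 block, load the block, claim the next free address (.132), create a handle, and write the block back to persist the claim. A first-free scan is one simple policy consistent with the sequential .131 through .134 assignments seen in this log; Calico's actual selection logic in ipam.go may differ, and the assumption that .128 to .131 are already taken by earlier endpoints is inferred, not shown, for .129 and .130.

```go
package main

import (
	"fmt"
	"net/netip"
)

// assignFromBlock claims the first free address in the host's affine block,
// marking it used (the "Writing block in order to claim IPs" step).
func assignFromBlock(base netip.Addr, size int, used map[netip.Addr]bool) (netip.Addr, bool) {
	addr := base
	for i := 0; i < size; i++ {
		if !used[addr] {
			used[addr] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false // block exhausted; IPAM would try another block
}

func main() {
	base := netip.MustParseAddr("192.168.88.128")
	used := map[netip.Addr]bool{}
	// Assume .128-.131 are already allocated (csi-node-driver got .131 above).
	for _, s := range []string{"192.168.88.128", "192.168.88.129",
		"192.168.88.130", "192.168.88.131"} {
		used[netip.MustParseAddr(s)] = true
	}
	ip, ok := assignFromBlock(base, 64, used) // a /26 holds 64 addresses
	fmt.Println(ip, ok)                       // 192.168.88.132 true
}
```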
Jan 30 13:43:33.098048 containerd[1552]: 2025-01-30 13:43:33.079 [INFO][4757] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" HandleID="k8s-pod-network.b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:33.100712 containerd[1552]: 2025-01-30 13:43:33.082 [INFO][4705] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-mkc94" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0", GenerateName:"calico-apiserver-7db98ddb54-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ed2fe40-9b9d-4c9d-9105-b15821825523", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db98ddb54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7db98ddb54-mkc94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali26c9ae9b734", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:33.100712 containerd[1552]: 2025-01-30 13:43:33.082 [INFO][4705] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-mkc94" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:33.100712 containerd[1552]: 2025-01-30 13:43:33.082 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26c9ae9b734 ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-mkc94" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:33.100712 containerd[1552]: 2025-01-30 13:43:33.084 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-mkc94" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:33.100712 containerd[1552]: 2025-01-30 13:43:33.084 [INFO][4705] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-mkc94" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0", GenerateName:"calico-apiserver-7db98ddb54-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ed2fe40-9b9d-4c9d-9105-b15821825523", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db98ddb54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f", Pod:"calico-apiserver-7db98ddb54-mkc94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali26c9ae9b734", MAC:"4a:d3:4a:54:66:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:33.100712 containerd[1552]: 2025-01-30 13:43:33.094 [INFO][4705] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f" Namespace="calico-apiserver" Pod="calico-apiserver-7db98ddb54-mkc94" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:33.123470 containerd[1552]: time="2025-01-30T13:43:33.123376978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:33.123470 containerd[1552]: time="2025-01-30T13:43:33.123449885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:33.123658 containerd[1552]: time="2025-01-30T13:43:33.123464372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:33.124228 containerd[1552]: time="2025-01-30T13:43:33.124086349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:33.127206 systemd-networkd[1240]: calif0cbb25be84: Link UP Jan 30 13:43:33.128635 systemd-networkd[1240]: calif0cbb25be84: Gained carrier Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.019 [INFO][4722] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0 calico-kube-controllers-55b68d756c- calico-system 176ea8e5-e830-4bc9-bb74-b55fbc8b7c09 907 0 2025-01-30 13:43:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55b68d756c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55b68d756c-dkqrp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif0cbb25be84 [] []}} ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Namespace="calico-system" Pod="calico-kube-controllers-55b68d756c-dkqrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.019 [INFO][4722] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Namespace="calico-system" Pod="calico-kube-controllers-55b68d756c-dkqrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.052 [INFO][4762] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" HandleID="k8s-pod-network.be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.060 [INFO][4762] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" HandleID="k8s-pod-network.be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55b68d756c-dkqrp", "timestamp":"2025-01-30 13:43:33.052711346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.060 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.079 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.079 [INFO][4762] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.081 [INFO][4762] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" host="localhost" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.087 [INFO][4762] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.095 [INFO][4762] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.097 [INFO][4762] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.101 [INFO][4762] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.101 [INFO][4762] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" host="localhost" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.103 [INFO][4762] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635 Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.110 [INFO][4762] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" host="localhost" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.119 [INFO][4762] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" host="localhost" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.119 [INFO][4762] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" host="localhost" Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.119 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
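Note how the two concurrent ADD requests above ([4757] and [4762]) interleave: [4762] acquires the host-wide IPAM lock at 13:43:33.079, the instant [4757] releases it, which is why they claim .132 and .133 without colliding. The sketch below models that serialization with a plain mutex over a next-free counter; it is an analogy for the locking behavior visible in the log, not Calico's datastore-backed lock implementation.

```go
package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // plays the role of the "host-wide IPAM lock"
	next int        // next free ordinal in the affine /26 block
}

func (h *hostIPAM) assign() string {
	h.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer h.mu.Unlock()
	ip := fmt.Sprintf("192.168.88.%d/26", 128+h.next)
	h.next++
	return ip // lock released on return: "Released host-wide IPAM lock."
}

func main() {
	h := &hostIPAM{next: 4} // .128-.131 assumed already taken
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // two concurrent CNI ADDs, like [4757] and [4762]
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(h.assign()) // .132 and .133, in either order
		}()
	}
	wg.Wait()
}
```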
Jan 30 13:43:33.142389 containerd[1552]: 2025-01-30 13:43:33.119 [INFO][4762] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" HandleID="k8s-pod-network.be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:33.143199 containerd[1552]: 2025-01-30 13:43:33.122 [INFO][4722] cni-plugin/k8s.go 386: Populated endpoint ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Namespace="calico-system" Pod="calico-kube-controllers-55b68d756c-dkqrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0", GenerateName:"calico-kube-controllers-55b68d756c-", Namespace:"calico-system", SelfLink:"", UID:"176ea8e5-e830-4bc9-bb74-b55fbc8b7c09", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b68d756c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55b68d756c-dkqrp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif0cbb25be84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:33.143199 containerd[1552]: 2025-01-30 13:43:33.122 [INFO][4722] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Namespace="calico-system" Pod="calico-kube-controllers-55b68d756c-dkqrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:33.143199 containerd[1552]: 2025-01-30 13:43:33.123 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0cbb25be84 ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Namespace="calico-system" Pod="calico-kube-controllers-55b68d756c-dkqrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:33.143199 containerd[1552]: 2025-01-30 13:43:33.128 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Namespace="calico-system" Pod="calico-kube-controllers-55b68d756c-dkqrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:33.143199 containerd[1552]: 2025-01-30 13:43:33.129 [INFO][4722] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Namespace="calico-system" Pod="calico-kube-controllers-55b68d756c-dkqrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0", GenerateName:"calico-kube-controllers-55b68d756c-", Namespace:"calico-system", SelfLink:"", UID:"176ea8e5-e830-4bc9-bb74-b55fbc8b7c09", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b68d756c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635", Pod:"calico-kube-controllers-55b68d756c-dkqrp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif0cbb25be84", MAC:"ee:1f:2f:44:d0:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:33.143199 containerd[1552]: 2025-01-30 13:43:33.137 [INFO][4722] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635" Namespace="calico-system" Pod="calico-kube-controllers-55b68d756c-dkqrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:33.151524 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:43:33.171634 containerd[1552]: time="2025-01-30T13:43:33.170397006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:33.171929 containerd[1552]: time="2025-01-30T13:43:33.171705301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:33.171929 containerd[1552]: time="2025-01-30T13:43:33.171726961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:33.171929 containerd[1552]: time="2025-01-30T13:43:33.171809476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:33.179631 containerd[1552]: time="2025-01-30T13:43:33.179579588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db98ddb54-mkc94,Uid:7ed2fe40-9b9d-4c9d-9105-b15821825523,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f\"" Jan 30 13:43:33.182811 containerd[1552]: time="2025-01-30T13:43:33.182766859Z" level=info msg="CreateContainer within sandbox \"b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:43:33.197073 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:43:33.199691 containerd[1552]: time="2025-01-30T13:43:33.199653138Z" level=info msg="CreateContainer within sandbox \"b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ce3070b7a9b71137ccbfc05d0e172e5a9676db23c7c36df1582fba0fb9c4d5b6\"" Jan 30 13:43:33.200250 containerd[1552]: time="2025-01-30T13:43:33.200211306Z" level=info msg="StartContainer for \"ce3070b7a9b71137ccbfc05d0e172e5a9676db23c7c36df1582fba0fb9c4d5b6\"" Jan 30 13:43:33.233014 containerd[1552]: time="2025-01-30T13:43:33.232908596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b68d756c-dkqrp,Uid:176ea8e5-e830-4bc9-bb74-b55fbc8b7c09,Namespace:calico-system,Attempt:1,} returns sandbox id \"be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635\"" Jan 30 13:43:33.276786 containerd[1552]: time="2025-01-30T13:43:33.276756149Z" level=info msg="StartContainer for \"ce3070b7a9b71137ccbfc05d0e172e5a9676db23c7c36df1582fba0fb9c4d5b6\" returns successfully" Jan 30 13:43:33.507247 containerd[1552]: time="2025-01-30T13:43:33.506625814Z" level=info msg="StopPodSandbox for \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\"" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.553 [INFO][4950] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.553 [INFO][4950] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" iface="eth0" netns="/var/run/netns/cni-ecd233de-4a84-a3e6-c69d-c1dc2c847e63" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.554 [INFO][4950] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" iface="eth0" netns="/var/run/netns/cni-ecd233de-4a84-a3e6-c69d-c1dc2c847e63" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.554 [INFO][4950] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" iface="eth0" netns="/var/run/netns/cni-ecd233de-4a84-a3e6-c69d-c1dc2c847e63" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.554 [INFO][4950] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.554 [INFO][4950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.577 [INFO][4958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" HandleID="k8s-pod-network.a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.577 [INFO][4958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.577 [INFO][4958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.583 [WARNING][4958] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" HandleID="k8s-pod-network.a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.584 [INFO][4958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" HandleID="k8s-pod-network.a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.585 [INFO][4958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:33.591285 containerd[1552]: 2025-01-30 13:43:33.588 [INFO][4950] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:33.597186 containerd[1552]: time="2025-01-30T13:43:33.597038552Z" level=info msg="TearDown network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\" successfully" Jan 30 13:43:33.597186 containerd[1552]: time="2025-01-30T13:43:33.597066614Z" level=info msg="StopPodSandbox for \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\" returns successfully" Jan 30 13:43:33.597521 kubelet[2738]: E0130 13:43:33.597476 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:33.598691 containerd[1552]: time="2025-01-30T13:43:33.598638123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kbbt2,Uid:1b15d90a-b342-46ab-afb0-2baf9fef6c45,Namespace:kube-system,Attempt:1,}" Jan 30 13:43:33.691365 kubelet[2738]: E0130 13:43:33.691327 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:33.711745 kubelet[2738]: I0130 13:43:33.711673 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7db98ddb54-jfc2r" podStartSLOduration=25.942719678 podStartE2EDuration="28.711650973s" podCreationTimestamp="2025-01-30 13:43:05 +0000 UTC" firstStartedPulling="2025-01-30 13:43:30.035844489 +0000 UTC m=+47.601596561" lastFinishedPulling="2025-01-30 13:43:32.804775784 +0000 UTC m=+50.370527856" observedRunningTime="2025-01-30 13:43:33.710989141 +0000 UTC m=+51.276741233" watchObservedRunningTime="2025-01-30 13:43:33.711650973 +0000 UTC m=+51.277403046" Jan 30 13:43:33.713265 kubelet[2738]: I0130 13:43:33.713224 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7db98ddb54-mkc94" podStartSLOduration=28.713214918 podStartE2EDuration="28.713214918s" podCreationTimestamp="2025-01-30 13:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:43:33.695019282 +0000 UTC m=+51.260771354" watchObservedRunningTime="2025-01-30 13:43:33.713214918 +0000 UTC m=+51.278967000" Jan 30 13:43:33.773682 systemd-networkd[1240]: cali144e44cafc2: Link UP Jan 30 13:43:33.777610 systemd-networkd[1240]: cali144e44cafc2: Gained carrier Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.683 [INFO][4966] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0 coredns-7db6d8ff4d- kube-system 1b15d90a-b342-46ab-afb0-2baf9fef6c45 924 0 2025-01-30 13:42:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-kbbt2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali144e44cafc2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kbbt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kbbt2-" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.684 [INFO][4966] cni-plugin/k8s.go 77: 
Extracted identifiers for CmdAddK8s ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kbbt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.728 [INFO][4979] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" HandleID="k8s-pod-network.b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.735 [INFO][4979] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" HandleID="k8s-pod-network.b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000289a70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-kbbt2", "timestamp":"2025-01-30 13:43:33.728365951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.735 [INFO][4979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.735 [INFO][4979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.735 [INFO][4979] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.737 [INFO][4979] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" host="localhost" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.741 [INFO][4979] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.745 [INFO][4979] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.746 [INFO][4979] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.749 [INFO][4979] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.749 [INFO][4979] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" host="localhost" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.752 [INFO][4979] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169 Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.756 [INFO][4979] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" host="localhost" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.762 [INFO][4979] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" host="localhost" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.762 [INFO][4979] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" host="localhost" Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.763 [INFO][4979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:33.796471 containerd[1552]: 2025-01-30 13:43:33.763 [INFO][4979] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" HandleID="k8s-pod-network.b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.797265 containerd[1552]: 2025-01-30 13:43:33.766 [INFO][4966] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kbbt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1b15d90a-b342-46ab-afb0-2baf9fef6c45", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-kbbt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali144e44cafc2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:33.797265 containerd[1552]: 2025-01-30 13:43:33.768 [INFO][4966] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kbbt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.797265 containerd[1552]: 2025-01-30 13:43:33.769 [INFO][4966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali144e44cafc2 
ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kbbt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.797265 containerd[1552]: 2025-01-30 13:43:33.774 [INFO][4966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kbbt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.797265 containerd[1552]: 2025-01-30 13:43:33.776 [INFO][4966] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kbbt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1b15d90a-b342-46ab-afb0-2baf9fef6c45", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169", Pod:"coredns-7db6d8ff4d-kbbt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali144e44cafc2", MAC:"d2:61:4d:46:50:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:33.797265 containerd[1552]: 2025-01-30 13:43:33.789 [INFO][4966] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kbbt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:33.844905 containerd[1552]: time="2025-01-30T13:43:33.844776828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:33.844905 containerd[1552]: time="2025-01-30T13:43:33.844844555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:33.844905 containerd[1552]: time="2025-01-30T13:43:33.844868480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:33.847091 containerd[1552]: time="2025-01-30T13:43:33.844976452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:33.888095 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:43:33.889121 systemd[1]: run-netns-cni\x2decd233de\x2d4a84\x2da3e6\x2dc69d\x2dc1dc2c847e63.mount: Deactivated successfully. Jan 30 13:43:33.920810 containerd[1552]: time="2025-01-30T13:43:33.920735773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kbbt2,Uid:1b15d90a-b342-46ab-afb0-2baf9fef6c45,Namespace:kube-system,Attempt:1,} returns sandbox id \"b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169\"" Jan 30 13:43:33.921611 kubelet[2738]: E0130 13:43:33.921576 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:33.923989 containerd[1552]: time="2025-01-30T13:43:33.923948542Z" level=info msg="CreateContainer within sandbox \"b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:43:34.080041 containerd[1552]: time="2025-01-30T13:43:34.079908670Z" level=info msg="CreateContainer within sandbox \"b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7dc477e5aff9ba313eef120e37e67db018c55b435f6fbce21016c1119e0b8785\"" Jan 30 13:43:34.081646 containerd[1552]: time="2025-01-30T13:43:34.080742034Z" level=info msg="StartContainer for \"7dc477e5aff9ba313eef120e37e67db018c55b435f6fbce21016c1119e0b8785\"" Jan 30 13:43:34.179467 containerd[1552]: time="2025-01-30T13:43:34.179425015Z" level=info msg="StartContainer for \"7dc477e5aff9ba313eef120e37e67db018c55b435f6fbce21016c1119e0b8785\" returns successfully" Jan 30 13:43:34.191646 systemd-networkd[1240]: cali26c9ae9b734: Gained IPv6LL Jan 30 13:43:34.255756 systemd-networkd[1240]: calif0cbb25be84: Gained IPv6LL Jan 30 13:43:34.390842 containerd[1552]: time="2025-01-30T13:43:34.390680601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:34.391680 containerd[1552]: time="2025-01-30T13:43:34.391647305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:43:34.392953 containerd[1552]: time="2025-01-30T13:43:34.392900977Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:34.395627 containerd[1552]: time="2025-01-30T13:43:34.395585855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:34.396370 containerd[1552]: time="2025-01-30T13:43:34.396319352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.588914543s" Jan 30 13:43:34.396370 containerd[1552]: time="2025-01-30T13:43:34.396365538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:43:34.398174 containerd[1552]: time="2025-01-30T13:43:34.397982893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:43:34.400001 containerd[1552]: time="2025-01-30T13:43:34.399954082Z" level=info msg="CreateContainer within sandbox \"93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:43:34.417375 containerd[1552]: time="2025-01-30T13:43:34.417334427Z" level=info msg="CreateContainer within sandbox \"93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0b36d5e7a80846c608180b05854d1046ce215b00c760b8a4ae66436363d08042\"" Jan 30 13:43:34.418981 containerd[1552]: time="2025-01-30T13:43:34.418029542Z" level=info msg="StartContainer for \"0b36d5e7a80846c608180b05854d1046ce215b00c760b8a4ae66436363d08042\"" Jan 30 13:43:34.482307 containerd[1552]: time="2025-01-30T13:43:34.482255173Z" level=info msg="StartContainer for \"0b36d5e7a80846c608180b05854d1046ce215b00c760b8a4ae66436363d08042\" returns successfully" Jan 30 13:43:34.694425 kubelet[2738]: E0130 13:43:34.694375 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:34.696201 kubelet[2738]: I0130 13:43:34.696171 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:34.697132 kubelet[2738]: E0130 13:43:34.697106 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:34.705832 kubelet[2738]: I0130 13:43:34.705648 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kbbt2" podStartSLOduration=38.705630101 podStartE2EDuration="38.705630101s" podCreationTimestamp="2025-01-30 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:43:34.705344535 +0000 UTC m=+52.271096617" watchObservedRunningTime="2025-01-30 13:43:34.705630101 +0000 UTC m=+52.271382173" Jan 30 13:43:35.215759 systemd-networkd[1240]: cali144e44cafc2: Gained IPv6LL Jan 30 13:43:35.698968 kubelet[2738]: E0130 13:43:35.698933 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:36.475847 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:53018.service - OpenSSH per-connection server daemon (10.0.0.1:53018). 
Jan 30 13:43:36.520761 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 53018 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:36.523327 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:36.529570 systemd-logind[1532]: New session 14 of user core. Jan 30 13:43:36.535850 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:43:36.685227 sshd[5136]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:36.690660 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:53018.service: Deactivated successfully. Jan 30 13:43:36.694246 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:43:36.695025 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:43:36.696252 systemd-logind[1532]: Removed session 14. Jan 30 13:43:36.700980 kubelet[2738]: E0130 13:43:36.700589 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:37.194065 containerd[1552]: time="2025-01-30T13:43:37.193999549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:37.195579 containerd[1552]: time="2025-01-30T13:43:37.195519461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:43:37.197556 containerd[1552]: time="2025-01-30T13:43:37.197446627Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:37.200628 containerd[1552]: time="2025-01-30T13:43:37.200572813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:37.201426 containerd[1552]: time="2025-01-30T13:43:37.201383374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.803294242s" Jan 30 13:43:37.201494 containerd[1552]: time="2025-01-30T13:43:37.201430282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:43:37.202976 containerd[1552]: time="2025-01-30T13:43:37.202804260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:43:37.213019 containerd[1552]: time="2025-01-30T13:43:37.212976076Z" level=info msg="CreateContainer within sandbox \"be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:43:37.238839 containerd[1552]: time="2025-01-30T13:43:37.238789660Z" level=info msg="CreateContainer within sandbox \"be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"6a56910a1a3f22200c3fdba6166684dc90c559eeff4a9a4e15d309cbc08272e3\"" Jan 30 13:43:37.239422 containerd[1552]: time="2025-01-30T13:43:37.239375138Z" level=info msg="StartContainer for \"6a56910a1a3f22200c3fdba6166684dc90c559eeff4a9a4e15d309cbc08272e3\"" Jan 30 13:43:37.312453 containerd[1552]: time="2025-01-30T13:43:37.312395057Z" level=info msg="StartContainer for \"6a56910a1a3f22200c3fdba6166684dc90c559eeff4a9a4e15d309cbc08272e3\" returns successfully" Jan 30 13:43:37.764809 kubelet[2738]: I0130 13:43:37.764718 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55b68d756c-dkqrp" podStartSLOduration=28.796303353 podStartE2EDuration="32.764687943s" podCreationTimestamp="2025-01-30 13:43:05 +0000 UTC" firstStartedPulling="2025-01-30 13:43:33.234261684 +0000 UTC m=+50.800013756" lastFinishedPulling="2025-01-30 13:43:37.202646274 +0000 UTC m=+54.768398346" observedRunningTime="2025-01-30 13:43:37.751521338 +0000 UTC m=+55.317273410" watchObservedRunningTime="2025-01-30 13:43:37.764687943 +0000 UTC m=+55.330440015" Jan 30 13:43:38.065950 kubelet[2738]: I0130 13:43:38.065796 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:40.625796 containerd[1552]: time="2025-01-30T13:43:40.625737454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:40.700285 containerd[1552]: time="2025-01-30T13:43:40.700200622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:43:40.744428 containerd[1552]: time="2025-01-30T13:43:40.744378852Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:40.791341 containerd[1552]: time="2025-01-30T13:43:40.791269669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:40.792004 containerd[1552]: time="2025-01-30T13:43:40.791977021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.58914548s" Jan 30 13:43:40.792077 containerd[1552]: time="2025-01-30T13:43:40.792008460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:43:40.793770 containerd[1552]: time="2025-01-30T13:43:40.793750060Z" level=info msg="CreateContainer within sandbox \"93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:43:41.409988 containerd[1552]: time="2025-01-30T13:43:41.409915664Z" level=info msg="CreateContainer within sandbox \"93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"5e2a06a5cdf3ce5656515bb5f8eaba658b02d0b8bda46c291ac178acd9634a56\"" Jan 30 13:43:41.410598 containerd[1552]: time="2025-01-30T13:43:41.410521317Z" level=info msg="StartContainer for \"5e2a06a5cdf3ce5656515bb5f8eaba658b02d0b8bda46c291ac178acd9634a56\"" Jan 30 13:43:41.595881 kubelet[2738]: I0130 13:43:41.595813 2738 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:43:41.595881 kubelet[2738]: I0130 13:43:41.595861 2738 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:43:41.642986 containerd[1552]: time="2025-01-30T13:43:41.642929087Z" level=info msg="StartContainer for \"5e2a06a5cdf3ce5656515bb5f8eaba658b02d0b8bda46c291ac178acd9634a56\" returns successfully" Jan 30 13:43:41.696816 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:45328.service - OpenSSH per-connection server daemon (10.0.0.1:45328). Jan 30 13:43:41.735852 sshd[5252]: Accepted publickey for core from 10.0.0.1 port 45328 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:41.737775 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:41.742075 systemd-logind[1532]: New session 15 of user core. Jan 30 13:43:41.751737 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:43:41.894438 kubelet[2738]: I0130 13:43:41.894360 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w9jzb" podStartSLOduration=27.352490464 podStartE2EDuration="36.894338582s" podCreationTimestamp="2025-01-30 13:43:05 +0000 UTC" firstStartedPulling="2025-01-30 13:43:31.25078614 +0000 UTC m=+48.816538212" lastFinishedPulling="2025-01-30 13:43:40.792634258 +0000 UTC m=+58.358386330" observedRunningTime="2025-01-30 13:43:41.892719895 +0000 UTC m=+59.458471998" watchObservedRunningTime="2025-01-30 13:43:41.894338582 +0000 UTC m=+59.460090654" Jan 30 13:43:42.064131 sshd[5252]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:42.068460 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:45328.service: Deactivated successfully. Jan 30 13:43:42.071065 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:43:42.071172 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:43:42.072338 systemd-logind[1532]: Removed session 15. Jan 30 13:43:42.494726 containerd[1552]: time="2025-01-30T13:43:42.494681618Z" level=info msg="StopPodSandbox for \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\"" Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.531 [WARNING][5282] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0", GenerateName:"calico-apiserver-7db98ddb54-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ed2fe40-9b9d-4c9d-9105-b15821825523", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db98ddb54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f", Pod:"calico-apiserver-7db98ddb54-mkc94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali26c9ae9b734", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.531 [INFO][5282] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.531 [INFO][5282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" iface="eth0" netns="" Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.531 [INFO][5282] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.531 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.551 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" HandleID="k8s-pod-network.4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.551 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.551 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.556 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" HandleID="k8s-pod-network.4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.556 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" HandleID="k8s-pod-network.4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.557 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:42.562895 containerd[1552]: 2025-01-30 13:43:42.559 [INFO][5282] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:42.563450 containerd[1552]: time="2025-01-30T13:43:42.562921251Z" level=info msg="TearDown network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\" successfully" Jan 30 13:43:42.563450 containerd[1552]: time="2025-01-30T13:43:42.562943855Z" level=info msg="StopPodSandbox for \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\" returns successfully" Jan 30 13:43:42.578666 containerd[1552]: time="2025-01-30T13:43:42.578585453Z" level=info msg="RemovePodSandbox for \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\"" Jan 30 13:43:42.584897 containerd[1552]: time="2025-01-30T13:43:42.584865530Z" level=info msg="Forcibly stopping sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\"" Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.620 [WARNING][5314] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0", GenerateName:"calico-apiserver-7db98ddb54-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ed2fe40-9b9d-4c9d-9105-b15821825523", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db98ddb54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b06e6c7c54668ce7b6abea20345c1fd9f58b49a31fc5a33e1dab07d6f445210f", Pod:"calico-apiserver-7db98ddb54-mkc94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali26c9ae9b734", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.620 [INFO][5314] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.620 [INFO][5314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" iface="eth0" netns="" Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.620 [INFO][5314] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.620 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.644 [INFO][5322] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" HandleID="k8s-pod-network.4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.644 [INFO][5322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.644 [INFO][5322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.650 [WARNING][5322] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" HandleID="k8s-pod-network.4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.650 [INFO][5322] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" HandleID="k8s-pod-network.4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Workload="localhost-k8s-calico--apiserver--7db98ddb54--mkc94-eth0" Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.651 [INFO][5322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:42.656036 containerd[1552]: 2025-01-30 13:43:42.653 [INFO][5314] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70" Jan 30 13:43:42.661465 containerd[1552]: time="2025-01-30T13:43:42.656081013Z" level=info msg="TearDown network for sandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\" successfully" Jan 30 13:43:42.741780 containerd[1552]: time="2025-01-30T13:43:42.741719075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:43:42.741917 containerd[1552]: time="2025-01-30T13:43:42.741812246Z" level=info msg="RemovePodSandbox \"4dd8006ee7b329fa0cce051a09474eb26c32395c6c2782e357467c75a0458b70\" returns successfully" Jan 30 13:43:42.742392 containerd[1552]: time="2025-01-30T13:43:42.742360908Z" level=info msg="StopPodSandbox for \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\"" Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.776 [WARNING][5344] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0", GenerateName:"calico-apiserver-7db98ddb54-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f4f520e-64e4-4078-a2f8-c4c75525a5da", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db98ddb54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f", Pod:"calico-apiserver-7db98ddb54-jfc2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a42e66813", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.776 [INFO][5344] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.776 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" iface="eth0" netns="" Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.776 [INFO][5344] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.776 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.795 [INFO][5352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" HandleID="k8s-pod-network.675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.795 [INFO][5352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.795 [INFO][5352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.802 [WARNING][5352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" HandleID="k8s-pod-network.675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.802 [INFO][5352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" HandleID="k8s-pod-network.675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.803 [INFO][5352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:42.808647 containerd[1552]: 2025-01-30 13:43:42.806 [INFO][5344] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:42.808647 containerd[1552]: time="2025-01-30T13:43:42.808610709Z" level=info msg="TearDown network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\" successfully" Jan 30 13:43:42.808647 containerd[1552]: time="2025-01-30T13:43:42.808636780Z" level=info msg="StopPodSandbox for \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\" returns successfully" Jan 30 13:43:42.809229 containerd[1552]: time="2025-01-30T13:43:42.809181775Z" level=info msg="RemovePodSandbox for \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\"" Jan 30 13:43:42.809229 containerd[1552]: time="2025-01-30T13:43:42.809231792Z" level=info msg="Forcibly stopping sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\"" Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.846 [WARNING][5376] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0", GenerateName:"calico-apiserver-7db98ddb54-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f4f520e-64e4-4078-a2f8-c4c75525a5da", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db98ddb54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6cbab1e88b50c5e2f57bffcc4f78ce9253a27ca016549af2b95893d819ea09f", Pod:"calico-apiserver-7db98ddb54-jfc2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a42e66813", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.846 [INFO][5376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.846 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" iface="eth0" netns="" Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.846 [INFO][5376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.846 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.868 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" HandleID="k8s-pod-network.675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.868 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.868 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.873 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" HandleID="k8s-pod-network.675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.873 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" HandleID="k8s-pod-network.675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Workload="localhost-k8s-calico--apiserver--7db98ddb54--jfc2r-eth0" Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.874 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:42.879342 containerd[1552]: 2025-01-30 13:43:42.877 [INFO][5376] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f" Jan 30 13:43:42.879879 containerd[1552]: time="2025-01-30T13:43:42.879369379Z" level=info msg="TearDown network for sandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\" successfully" Jan 30 13:43:42.947958 containerd[1552]: time="2025-01-30T13:43:42.947906559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:43:42.947958 containerd[1552]: time="2025-01-30T13:43:42.947990922Z" level=info msg="RemovePodSandbox \"675f234e2d27eff1f77203fd45f46f585043e3d1ff847913860f5f1a83609a3f\" returns successfully" Jan 30 13:43:42.948468 containerd[1552]: time="2025-01-30T13:43:42.948441244Z" level=info msg="StopPodSandbox for \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\"" Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:42.986 [WARNING][5405] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w9jzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72a87a81-6fc8-4427-8a91-308c02047854", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5", Pod:"csi-node-driver-w9jzb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali318c7718c17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:42.986 [INFO][5405] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:42.986 [INFO][5405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" iface="eth0" netns="" Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:42.986 [INFO][5405] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:42.986 [INFO][5405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:43.008 [INFO][5412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" HandleID="k8s-pod-network.f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:43.009 [INFO][5412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:43.009 [INFO][5412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:43.013 [WARNING][5412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" HandleID="k8s-pod-network.f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:43.013 [INFO][5412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" HandleID="k8s-pod-network.f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:43.015 [INFO][5412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:43.020508 containerd[1552]: 2025-01-30 13:43:43.017 [INFO][5405] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:43.020919 containerd[1552]: time="2025-01-30T13:43:43.020552871Z" level=info msg="TearDown network for sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\" successfully" Jan 30 13:43:43.020919 containerd[1552]: time="2025-01-30T13:43:43.020579973Z" level=info msg="StopPodSandbox for \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\" returns successfully" Jan 30 13:43:43.021190 containerd[1552]: time="2025-01-30T13:43:43.021145377Z" level=info msg="RemovePodSandbox for \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\"" Jan 30 13:43:43.021190 containerd[1552]: time="2025-01-30T13:43:43.021188090Z" level=info msg="Forcibly stopping sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\"" Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.055 [WARNING][5435] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w9jzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72a87a81-6fc8-4427-8a91-308c02047854", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93ab7fd673d1bee44b9040ca7936bc43b585fd5722d7229d1ce89e52473567b5", Pod:"csi-node-driver-w9jzb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali318c7718c17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.055 [INFO][5435] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.055 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" iface="eth0" netns="" Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.055 [INFO][5435] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.055 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.077 [INFO][5442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" HandleID="k8s-pod-network.f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.077 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.077 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.082 [WARNING][5442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" HandleID="k8s-pod-network.f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.082 [INFO][5442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" HandleID="k8s-pod-network.f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Workload="localhost-k8s-csi--node--driver--w9jzb-eth0" Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.083 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:43.088942 containerd[1552]: 2025-01-30 13:43:43.085 [INFO][5435] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82" Jan 30 13:43:43.089595 containerd[1552]: time="2025-01-30T13:43:43.088845153Z" level=info msg="TearDown network for sandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\" successfully" Jan 30 13:43:43.094700 containerd[1552]: time="2025-01-30T13:43:43.094663175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:43:43.094774 containerd[1552]: time="2025-01-30T13:43:43.094732008Z" level=info msg="RemovePodSandbox \"f7259134cc71ea116f1cb0decef0689787a7c87b6806291a546c4106487b2d82\" returns successfully" Jan 30 13:43:43.095181 containerd[1552]: time="2025-01-30T13:43:43.095159194Z" level=info msg="StopPodSandbox for \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\"" Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.126 [WARNING][5464] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0", GenerateName:"calico-kube-controllers-55b68d756c-", Namespace:"calico-system", SelfLink:"", UID:"176ea8e5-e830-4bc9-bb74-b55fbc8b7c09", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b68d756c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635", Pod:"calico-kube-controllers-55b68d756c-dkqrp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif0cbb25be84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.127 [INFO][5464] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.127 [INFO][5464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" iface="eth0" netns="" Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.127 [INFO][5464] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.127 [INFO][5464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.147 [INFO][5471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" HandleID="k8s-pod-network.81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.147 [INFO][5471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.147 [INFO][5471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.153 [WARNING][5471] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" HandleID="k8s-pod-network.81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.153 [INFO][5471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" HandleID="k8s-pod-network.81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.154 [INFO][5471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:43.158990 containerd[1552]: 2025-01-30 13:43:43.156 [INFO][5464] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:43.159533 containerd[1552]: time="2025-01-30T13:43:43.159033381Z" level=info msg="TearDown network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\" successfully" Jan 30 13:43:43.159533 containerd[1552]: time="2025-01-30T13:43:43.159059831Z" level=info msg="StopPodSandbox for \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\" returns successfully" Jan 30 13:43:43.159646 containerd[1552]: time="2025-01-30T13:43:43.159617650Z" level=info msg="RemovePodSandbox for \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\"" Jan 30 13:43:43.159682 containerd[1552]: time="2025-01-30T13:43:43.159646507Z" level=info msg="Forcibly stopping sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\"" Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.191 [WARNING][5493] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0", GenerateName:"calico-kube-controllers-55b68d756c-", Namespace:"calico-system", SelfLink:"", UID:"176ea8e5-e830-4bc9-bb74-b55fbc8b7c09", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b68d756c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be709cf806be7b2756a333eb93cc88f8a0670527b4fc41bbc131df55111f0635", Pod:"calico-kube-controllers-55b68d756c-dkqrp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif0cbb25be84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.191 [INFO][5493] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.191 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" iface="eth0" netns="" Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.191 [INFO][5493] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.191 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.210 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" HandleID="k8s-pod-network.81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.210 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.210 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.217 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" HandleID="k8s-pod-network.81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.217 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" HandleID="k8s-pod-network.81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Workload="localhost-k8s-calico--kube--controllers--55b68d756c--dkqrp-eth0" Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.218 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:43.223101 containerd[1552]: 2025-01-30 13:43:43.220 [INFO][5493] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3" Jan 30 13:43:43.223602 containerd[1552]: time="2025-01-30T13:43:43.223154946Z" level=info msg="TearDown network for sandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\" successfully" Jan 30 13:43:43.232829 containerd[1552]: time="2025-01-30T13:43:43.232774080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:43:43.232829 containerd[1552]: time="2025-01-30T13:43:43.232840507Z" level=info msg="RemovePodSandbox \"81e4ab3c0d90ca733fe16bf0458cac06e935b79c15ae58123cea6a43f046a4d3\" returns successfully" Jan 30 13:43:43.233364 containerd[1552]: time="2025-01-30T13:43:43.233336217Z" level=info msg="StopPodSandbox for \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\"" Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.270 [WARNING][5523] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1b15d90a-b342-46ab-afb0-2baf9fef6c45", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169", Pod:"coredns-7db6d8ff4d-kbbt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali144e44cafc2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.271 [INFO][5523] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.271 [INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" iface="eth0" netns="" Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.271 [INFO][5523] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.271 [INFO][5523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.296 [INFO][5530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" HandleID="k8s-pod-network.a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.296 [INFO][5530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.296 [INFO][5530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.302 [WARNING][5530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" HandleID="k8s-pod-network.a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.302 [INFO][5530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" HandleID="k8s-pod-network.a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.304 [INFO][5530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:43.309929 containerd[1552]: 2025-01-30 13:43:43.306 [INFO][5523] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:43.310340 containerd[1552]: time="2025-01-30T13:43:43.309976283Z" level=info msg="TearDown network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\" successfully" Jan 30 13:43:43.310340 containerd[1552]: time="2025-01-30T13:43:43.310009828Z" level=info msg="StopPodSandbox for \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\" returns successfully" Jan 30 13:43:43.310630 containerd[1552]: time="2025-01-30T13:43:43.310610349Z" level=info msg="RemovePodSandbox for \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\"" Jan 30 13:43:43.310672 containerd[1552]: time="2025-01-30T13:43:43.310638844Z" level=info msg="Forcibly stopping sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\"" Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.347 [WARNING][5552] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1b15d90a-b342-46ab-afb0-2baf9fef6c45", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0695b5a4bd896536fcc1c1db98f269663a1537f08afa6efd15e84eff7fea169", Pod:"coredns-7db6d8ff4d-kbbt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali144e44cafc2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.347 [INFO][5552] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.348 [INFO][5552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" iface="eth0" netns="" Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.348 [INFO][5552] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.348 [INFO][5552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.368 [INFO][5559] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" HandleID="k8s-pod-network.a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.368 [INFO][5559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.368 [INFO][5559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.373 [WARNING][5559] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" HandleID="k8s-pod-network.a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.373 [INFO][5559] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" HandleID="k8s-pod-network.a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Workload="localhost-k8s-coredns--7db6d8ff4d--kbbt2-eth0" Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.375 [INFO][5559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:43.380650 containerd[1552]: 2025-01-30 13:43:43.377 [INFO][5552] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2" Jan 30 13:43:43.380650 containerd[1552]: time="2025-01-30T13:43:43.380600304Z" level=info msg="TearDown network for sandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\" successfully" Jan 30 13:43:43.385312 containerd[1552]: time="2025-01-30T13:43:43.385252141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:43:43.385380 containerd[1552]: time="2025-01-30T13:43:43.385334801Z" level=info msg="RemovePodSandbox \"a70c4be5e0bafc9e877aced7fe4242ea759facccafdf42d79e07ae17122921c2\" returns successfully" Jan 30 13:43:43.385819 containerd[1552]: time="2025-01-30T13:43:43.385793949Z" level=info msg="StopPodSandbox for \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\"" Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.419 [WARNING][5582] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e194480b-6ba7-4dbb-b599-88b607e62d55", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954", Pod:"coredns-7db6d8ff4d-tlsqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali302955e2184", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.419 [INFO][5582] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.419 [INFO][5582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" iface="eth0" netns="" Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.419 [INFO][5582] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.419 [INFO][5582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.438 [INFO][5589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" HandleID="k8s-pod-network.39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.439 [INFO][5589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.439 [INFO][5589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.443 [WARNING][5589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" HandleID="k8s-pod-network.39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.444 [INFO][5589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" HandleID="k8s-pod-network.39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.445 [INFO][5589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:43.449801 containerd[1552]: 2025-01-30 13:43:43.447 [INFO][5582] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:43.450289 containerd[1552]: time="2025-01-30T13:43:43.449849746Z" level=info msg="TearDown network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\" successfully" Jan 30 13:43:43.450289 containerd[1552]: time="2025-01-30T13:43:43.449883752Z" level=info msg="StopPodSandbox for \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\" returns successfully" Jan 30 13:43:43.450514 containerd[1552]: time="2025-01-30T13:43:43.450474535Z" level=info msg="RemovePodSandbox for \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\"" Jan 30 13:43:43.450556 containerd[1552]: time="2025-01-30T13:43:43.450521265Z" level=info msg="Forcibly stopping sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\"" Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.489 [WARNING][5613] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e194480b-6ba7-4dbb-b599-88b607e62d55", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ceba88e95baa6ac0e3eb8b529308dbc776e66311dfd696365a054da07f6a954", Pod:"coredns-7db6d8ff4d-tlsqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali302955e2184", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.489 [INFO][5613] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.489 [INFO][5613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" iface="eth0" netns="" Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.489 [INFO][5613] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.489 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.509 [INFO][5621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" HandleID="k8s-pod-network.39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.510 [INFO][5621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.510 [INFO][5621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.514 [WARNING][5621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" HandleID="k8s-pod-network.39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.515 [INFO][5621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" HandleID="k8s-pod-network.39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Workload="localhost-k8s-coredns--7db6d8ff4d--tlsqp-eth0" Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.516 [INFO][5621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:43:43.521060 containerd[1552]: 2025-01-30 13:43:43.518 [INFO][5613] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7" Jan 30 13:43:43.521475 containerd[1552]: time="2025-01-30T13:43:43.521061194Z" level=info msg="TearDown network for sandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\" successfully" Jan 30 13:43:43.525148 containerd[1552]: time="2025-01-30T13:43:43.525106317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:43:43.525280 containerd[1552]: time="2025-01-30T13:43:43.525176624Z" level=info msg="RemovePodSandbox \"39ffd2f260cfe00265a55b68b34ae177ff916739c63702f317c6272a6c15d2f7\" returns successfully" Jan 30 13:43:47.079693 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342). Jan 30 13:43:47.112662 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:47.114434 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:47.119320 systemd-logind[1532]: New session 16 of user core. Jan 30 13:43:47.126787 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:43:47.252149 sshd[5637]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:47.256948 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:45342.service: Deactivated successfully. Jan 30 13:43:47.260422 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:43:47.261777 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:43:47.262788 systemd-logind[1532]: Removed session 16. Jan 30 13:43:50.572813 kubelet[2738]: E0130 13:43:50.572768 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:52.263769 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:39220.service - OpenSSH per-connection server daemon (10.0.0.1:39220). 
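Editor's note: the three teardown cycles above all follow the same Calico CNI shape. containerd issues StopPodSandbox (then "Forcibly stopping sandbox" on the RemovePodSandbox path); the plugin warns that CNI_CONTAINERID no longer matches the live WorkloadEndpoint, so the endpoint record is kept; the IPAM plugin then takes the host-wide lock, tries to release the address by handle ID, logs "Asked to release address but it doesn't exist. Ignoring", falls back to releasing by workload ID, and drops the lock. The Go sketch below paraphrases that lock-then-fallback sequence as the log records it; it is illustrative only, not Calico's actual code, and every identifier in it (ipamLock, releaseByHandle, releaseByWorkload, errNotFound) is a hypothetical stand-in.

    // Illustrative sketch of the release sequence logged by ipam/ipam_plugin.go
    // above. All names are hypothetical stand-ins, not Calico's API.
    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    var ipamLock sync.Mutex                   // stand-in for the host-wide IPAM lock
    var errNotFound = errors.New("not found") // stand-in for a missing handle

    func releaseByHandle(handleID string) error { return errNotFound } // the path logged at 412
    func releaseByWorkload(workloadID string) error { return nil }     // the path logged at 440

    func releaseAddresses(handleID, workloadID string) error {
        ipamLock.Lock()         // "Acquired host-wide IPAM lock."
        defer ipamLock.Unlock() // "Released host-wide IPAM lock."

        err := releaseByHandle(handleID)
        if errors.Is(err, errNotFound) {
            // "Asked to release address but it doesn't exist. Ignoring":
            // fall back to releasing whatever the workload ID still holds.
            return releaseByWorkload(workloadID)
        }
        return err
    }

    func main() {
        fmt.Println(releaseAddresses("k8s-pod-network.39ffd2f2...", "localhost-k8s-coredns...-eth0"))
    }

The CNI_CONTAINERID mismatch warning is also why only IPAM state is touched here: the WorkloadEndpoint dumps above show the endpoint already belongs to a newer container (b0695b5a... and 6ceba88e...), so deleting the WEP would break the running pod.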
Jan 30 13:43:52.293068 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 39220 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:52.294715 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:52.298714 systemd-logind[1532]: New session 17 of user core. Jan 30 13:43:52.307761 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:43:52.483619 sshd[5694]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:52.491092 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:39220.service: Deactivated successfully. Jan 30 13:43:52.494394 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:43:52.495101 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:43:52.496204 systemd-logind[1532]: Removed session 17. Jan 30 13:43:57.504749 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:48020.service - OpenSSH per-connection server daemon (10.0.0.1:48020). Jan 30 13:43:57.534273 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 48020 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:57.535942 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:57.539892 systemd-logind[1532]: New session 18 of user core. Jan 30 13:43:57.550760 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:43:57.658321 sshd[5713]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:57.668748 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:48034.service - OpenSSH per-connection server daemon (10.0.0.1:48034). Jan 30 13:43:57.669340 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:48020.service: Deactivated successfully. Jan 30 13:43:57.672893 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:43:57.673679 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:43:57.674574 systemd-logind[1532]: Removed session 18. Jan 30 13:43:57.699387 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 48034 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:57.701129 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:57.705362 systemd-logind[1532]: New session 19 of user core. Jan 30 13:43:57.710809 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:43:57.953388 sshd[5725]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:57.963717 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:48050.service - OpenSSH per-connection server daemon (10.0.0.1:48050). Jan 30 13:43:57.964435 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:48034.service: Deactivated successfully. Jan 30 13:43:57.968021 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:43:57.969148 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:43:57.970848 systemd-logind[1532]: Removed session 19. Jan 30 13:43:57.998788 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 48050 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:58.000819 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:58.005880 systemd-logind[1532]: New session 20 of user core. Jan 30 13:43:58.013771 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 30 13:43:58.506036 kubelet[2738]: E0130 13:43:58.505998 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:59.507795 kubelet[2738]: E0130 13:43:59.507392 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:59.904235 sshd[5738]: pam_unix(sshd:session): session closed for user core Jan 30 13:43:59.910910 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:48062.service - OpenSSH per-connection server daemon (10.0.0.1:48062). Jan 30 13:43:59.911380 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:48050.service: Deactivated successfully. Jan 30 13:43:59.916948 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:43:59.917673 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:43:59.919003 systemd-logind[1532]: Removed session 20. Jan 30 13:43:59.944217 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 48062 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:43:59.945972 sshd[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:43:59.950817 systemd-logind[1532]: New session 21 of user core. Jan 30 13:43:59.960809 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:44:00.223719 sshd[5757]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:00.232927 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:48064.service - OpenSSH per-connection server daemon (10.0.0.1:48064). Jan 30 13:44:00.233565 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:48062.service: Deactivated successfully. Jan 30 13:44:00.235747 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:44:00.237591 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:44:00.238962 systemd-logind[1532]: Removed session 21. Jan 30 13:44:00.265805 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 48064 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:44:00.267538 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:00.272414 systemd-logind[1532]: New session 22 of user core. Jan 30 13:44:00.276838 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:44:00.452701 sshd[5773]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:00.457977 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:48064.service: Deactivated successfully. Jan 30 13:44:00.460917 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:44:00.461817 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:44:00.462796 systemd-logind[1532]: Removed session 22. Jan 30 13:44:00.506062 kubelet[2738]: E0130 13:44:00.505883 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:44:05.464715 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:48066.service - OpenSSH per-connection server daemon (10.0.0.1:48066). 
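Editor's note: the recurring kubelet event above (dns.go:153, "Nameserver limits exceeded") is kubelet enforcing the classic glibc resolver limit of three nameservers when it builds a pod's resolv.conf: entries beyond the first three on the node are dropped, and the applied line here keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8. A node resolv.conf like the following would reproduce the event; the fourth server is invented for illustration, since the log does not say which entry was omitted.

    # Hypothetical node /etc/resolv.conf with one nameserver too many.
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 192.0.2.53   # dropped by kubelet; triggers the dns.go:153 event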
Jan 30 13:44:05.495375 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 48066 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:44:05.497250 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:05.503377 systemd-logind[1532]: New session 23 of user core. Jan 30 13:44:05.509839 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:44:05.635230 sshd[5797]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:05.639658 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:48066.service: Deactivated successfully. Jan 30 13:44:05.642591 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:44:05.643114 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:44:05.644221 systemd-logind[1532]: Removed session 23. Jan 30 13:44:10.642919 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:38386.service - OpenSSH per-connection server daemon (10.0.0.1:38386). Jan 30 13:44:10.675980 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 38386 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:44:10.677641 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:10.682163 systemd-logind[1532]: New session 24 of user core. Jan 30 13:44:10.687859 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:44:10.792989 sshd[5815]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:10.796232 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:38386.service: Deactivated successfully. Jan 30 13:44:10.800130 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:44:10.800595 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:44:10.802746 systemd-logind[1532]: Removed session 24. Jan 30 13:44:12.505949 kubelet[2738]: E0130 13:44:12.505891 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:44:15.807888 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:38402.service - OpenSSH per-connection server daemon (10.0.0.1:38402). Jan 30 13:44:15.849035 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 38402 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:44:15.850899 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:15.855124 systemd-logind[1532]: New session 25 of user core. Jan 30 13:44:15.863963 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:44:15.984204 sshd[5834]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:15.988630 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:38402.service: Deactivated successfully. Jan 30 13:44:15.991238 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:44:15.991344 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:44:15.992514 systemd-logind[1532]: Removed session 25. Jan 30 13:44:16.506442 kubelet[2738]: E0130 13:44:16.506404 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:44:21.002770 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:46030.service - OpenSSH per-connection server daemon (10.0.0.1:46030). 
Jan 30 13:44:21.034307 sshd[5891]: Accepted publickey for core from 10.0.0.1 port 46030 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:44:21.035861 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:21.039790 systemd-logind[1532]: New session 26 of user core. Jan 30 13:44:21.049761 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:44:21.148981 sshd[5891]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:21.152302 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:46030.service: Deactivated successfully. Jan 30 13:44:21.154333 systemd-logind[1532]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:44:21.154608 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:44:21.155676 systemd-logind[1532]: Removed session 26.
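Editor's note: the service names in these entries (for example sshd@25-10.0.0.26:22-10.0.0.1:46030.service, labeled "OpenSSH per-connection server daemon") are the signature of socket-activated sshd: systemd owns the listener on port 22 and spawns one templated service instance per TCP connection, which is why every SSH session gets its own service unit, its own session-N.scope, and its own "Deactivated successfully" line after logout. Below is a minimal sketch of such a unit pair, assuming stock systemd semantics; it is not necessarily the exact unit files this image ships.

    # sshd.socket: systemd listens and accepts on behalf of sshd.
    [Socket]
    ListenStream=22
    Accept=yes            # one sshd@<instance>.service per accepted connection

    # sshd@.service: template instantiated for each connection.
    [Service]
    ExecStart=-/usr/sbin/sshd -i   # -i: inetd mode, speak over the passed socket
    StandardInput=socket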