Jan 13 21:26:01.916160 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:26:01.916187 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:26:01.916202 kernel: BIOS-provided physical RAM map:
Jan 13 21:26:01.916210 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:26:01.916218 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:26:01.916227 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:26:01.916237 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 21:26:01.916246 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 21:26:01.916254 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:26:01.916265 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:26:01.916274 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:26:01.916282 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:26:01.916291 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:26:01.916300 kernel: NX (Execute Disable) protection: active
Jan 13 21:26:01.916311 kernel: APIC: Static calls initialized
Jan 13 21:26:01.916323 kernel: SMBIOS 2.8 present.
Jan 13 21:26:01.916332 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 21:26:01.916341 kernel: Hypervisor detected: KVM
Jan 13 21:26:01.916351 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:26:01.916360 kernel: kvm-clock: using sched offset of 2204836666 cycles
Jan 13 21:26:01.916369 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:26:01.916379 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:26:01.916389 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:26:01.916399 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:26:01.916408 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 21:26:01.916421 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:26:01.916431 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:26:01.916440 kernel: Using GB pages for direct mapping
Jan 13 21:26:01.916450 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:26:01.916459 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 21:26:01.916469 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:01.916478 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:01.916488 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:01.916500 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 21:26:01.916510 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:01.916519 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:01.916528 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:01.916538 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:01.916547 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 21:26:01.916557 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 21:26:01.916571 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 21:26:01.916584 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 21:26:01.916594 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 21:26:01.916604 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 21:26:01.916614 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 21:26:01.916624 kernel: No NUMA configuration found
Jan 13 21:26:01.916634 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 21:26:01.916644 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 21:26:01.916656 kernel: Zone ranges:
Jan 13 21:26:01.916666 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:26:01.916676 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 21:26:01.916686 kernel: Normal empty
Jan 13 21:26:01.916696 kernel: Movable zone start for each node
Jan 13 21:26:01.916706 kernel: Early memory node ranges
Jan 13 21:26:01.916716 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:26:01.916725 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 21:26:01.916736 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 21:26:01.916748 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:26:01.916759 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:26:01.916769 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 21:26:01.916778 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:26:01.916788 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:26:01.916799 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:26:01.916809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:26:01.916819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:26:01.916829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:26:01.916843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:26:01.916853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:26:01.916862 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:26:01.916873 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:26:01.916882 kernel: TSC deadline timer available
Jan 13 21:26:01.916892 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:26:01.916902 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:26:01.916932 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:26:01.916943 kernel: kvm-guest: setup PV sched yield
Jan 13 21:26:01.916953 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:26:01.916975 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:26:01.916986 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:26:01.916996 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:26:01.917006 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:26:01.917016 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:26:01.917026 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:26:01.917036 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:26:01.917046 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:26:01.917058 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:26:01.917073 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:26:01.917083 kernel: random: crng init done
Jan 13 21:26:01.917094 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:26:01.917104 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:26:01.917114 kernel: Fallback order for Node 0: 0
Jan 13 21:26:01.917125 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 632732
Jan 13 21:26:01.917135 kernel: Policy zone: DMA32
Jan 13 21:26:01.917145 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:26:01.917160 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 13 21:26:01.917170 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:26:01.917181 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:26:01.917191 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:26:01.917201 kernel: Dynamic Preempt: voluntary
Jan 13 21:26:01.917211 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:26:01.917222 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:26:01.917233 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:26:01.917244 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:26:01.917258 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:26:01.917268 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:26:01.917278 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:26:01.917289 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:26:01.917300 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:26:01.917310 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:26:01.917321 kernel: Console: colour VGA+ 80x25
Jan 13 21:26:01.917331 kernel: printk: console [ttyS0] enabled
Jan 13 21:26:01.917342 kernel: ACPI: Core revision 20230628
Jan 13 21:26:01.917355 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:26:01.917365 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:26:01.917376 kernel: x2apic enabled
Jan 13 21:26:01.917386 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:26:01.917397 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:26:01.917408 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:26:01.917418 kernel: kvm-guest: setup PV IPIs
Jan 13 21:26:01.917442 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:26:01.917453 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:26:01.917464 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:26:01.917475 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:26:01.917486 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:26:01.917499 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:26:01.917510 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:26:01.917521 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:26:01.917532 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:26:01.917545 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:26:01.917556 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:26:01.917567 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:26:01.917578 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:26:01.917588 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:26:01.917598 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:26:01.917609 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:26:01.917620 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:26:01.917630 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:26:01.917644 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:26:01.917655 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:26:01.917666 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:26:01.917676 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:26:01.917686 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:26:01.917697 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:26:01.917707 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:26:01.917718 kernel: landlock: Up and running.
Jan 13 21:26:01.917728 kernel: SELinux: Initializing.
Jan 13 21:26:01.917742 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:26:01.917752 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:26:01.917763 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:26:01.917773 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:26:01.917784 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:26:01.917795 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:26:01.917805 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:26:01.917816 kernel: ... version: 0
Jan 13 21:26:01.917829 kernel: ... bit width: 48
Jan 13 21:26:01.917839 kernel: ... generic registers: 6
Jan 13 21:26:01.917850 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:26:01.917860 kernel: ... max period: 00007fffffffffff
Jan 13 21:26:01.917869 kernel: ... fixed-purpose events: 0
Jan 13 21:26:01.917879 kernel: ... event mask: 000000000000003f
Jan 13 21:26:01.917889 kernel: signal: max sigframe size: 1776
Jan 13 21:26:01.917899 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:26:01.917910 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:26:01.918013 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:26:01.918029 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:26:01.918039 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:26:01.918049 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:26:01.918060 kernel: smpboot: Max logical packages: 1
Jan 13 21:26:01.918070 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:26:01.918080 kernel: devtmpfs: initialized
Jan 13 21:26:01.918091 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:26:01.918101 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:26:01.918111 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:26:01.918125 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:26:01.918136 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:26:01.918146 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:26:01.918157 kernel: audit: type=2000 audit(1736803561.474:1): state=initialized audit_enabled=0 res=1
Jan 13 21:26:01.918167 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:26:01.918177 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:26:01.918187 kernel: cpuidle: using governor menu
Jan 13 21:26:01.918197 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:26:01.918207 kernel: dca service started, version 1.12.1
Jan 13 21:26:01.918221 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:26:01.918231 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:26:01.918242 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:26:01.918252 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:26:01.918263 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:26:01.918273 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:26:01.918284 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:26:01.918294 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:26:01.918303 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:26:01.918317 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:26:01.918326 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:26:01.918337 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:26:01.918347 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:26:01.918357 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:26:01.918367 kernel: ACPI: Interpreter enabled
Jan 13 21:26:01.918377 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:26:01.918388 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:26:01.918398 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:26:01.918411 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:26:01.918421 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:26:01.918432 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:26:01.918645 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:26:01.918808 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:26:01.918955 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:26:01.918975 kernel: PCI host bridge to bus 0000:00
Jan 13 21:26:01.919107 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:26:01.919221 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:26:01.919363 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:26:01.919520 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:26:01.919636 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:26:01.919744 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 21:26:01.919852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:26:01.920060 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:26:01.920190 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:26:01.920309 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 21:26:01.920425 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 21:26:01.920541 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 21:26:01.920659 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:26:01.920786 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:26:01.920910 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:26:01.921053 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 21:26:01.921173 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 21:26:01.921304 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:26:01.921425 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:26:01.921544 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 21:26:01.921666 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 21:26:01.921794 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:26:01.921913 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 21:26:01.922053 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 21:26:01.922172 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 21:26:01.922312 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 21:26:01.922475 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:26:01.922634 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:26:01.922795 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:26:01.922974 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 21:26:01.923131 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 21:26:01.923296 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:26:01.923451 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:26:01.923466 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:26:01.923481 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:26:01.923492 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:26:01.923502 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:26:01.923513 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:26:01.923523 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:26:01.923533 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:26:01.923543 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:26:01.923554 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:26:01.923564 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:26:01.923578 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:26:01.923588 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:26:01.923599 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:26:01.923609 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:26:01.923619 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:26:01.923630 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:26:01.923640 kernel: iommu: Default domain type: Translated
Jan 13 21:26:01.923651 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:26:01.923661 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:26:01.923676 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:26:01.923686 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:26:01.923696 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 21:26:01.923876 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:26:01.924062 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:26:01.924203 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:26:01.924215 kernel: vgaarb: loaded
Jan 13 21:26:01.924225 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:26:01.924240 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:26:01.924250 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:26:01.924260 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:26:01.924270 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:26:01.924280 kernel: pnp: PnP ACPI init
Jan 13 21:26:01.924442 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:26:01.924460 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:26:01.924471 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:26:01.924486 kernel: NET: Registered PF_INET protocol family
Jan 13 21:26:01.924498 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:26:01.924509 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:26:01.924520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:26:01.924531 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:26:01.924542 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:26:01.924552 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:26:01.924563 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:26:01.924575 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:26:01.924588 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:26:01.924599 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:26:01.924743 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:26:01.924887 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:26:01.925061 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:26:01.925206 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:26:01.925349 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:26:01.925492 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 21:26:01.925512 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:26:01.925524 kernel: Initialise system trusted keyrings
Jan 13 21:26:01.925534 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:26:01.925545 kernel: Key type asymmetric registered
Jan 13 21:26:01.925555 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:26:01.925565 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:26:01.925576 kernel: io scheduler mq-deadline registered
Jan 13 21:26:01.925587 kernel: io scheduler kyber registered
Jan 13 21:26:01.925597 kernel: io scheduler bfq registered
Jan 13 21:26:01.925611 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:26:01.925622 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:26:01.925633 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:26:01.925643 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:26:01.925654 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:26:01.925664 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:26:01.925675 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:26:01.925686 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:26:01.925696 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:26:01.925863 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:26:01.925880 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:26:01.926054 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:26:01.926205 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:26:01 UTC (1736803561)
Jan 13 21:26:01.926352 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:26:01.926368 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:26:01.926379 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:26:01.926389 kernel: Segment Routing with IPv6
Jan 13 21:26:01.926404 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:26:01.926415 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:26:01.926425 kernel: Key type dns_resolver registered
Jan 13 21:26:01.926435 kernel: IPI shorthand broadcast: enabled
Jan 13 21:26:01.926446 kernel: sched_clock: Marking stable (615002129, 104599634)->(752830616, -33228853)
Jan 13 21:26:01.926456 kernel: registered taskstats version 1
Jan 13 21:26:01.926467 kernel: Loading compiled-in X.509 certificates
Jan 13 21:26:01.926478 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:26:01.926488 kernel: Key type .fscrypt registered
Jan 13 21:26:01.926501 kernel: Key type fscrypt-provisioning registered
Jan 13 21:26:01.926512 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:26:01.926523 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:26:01.926534 kernel: ima: No architecture policies found
Jan 13 21:26:01.926544 kernel: clk: Disabling unused clocks
Jan 13 21:26:01.926554 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:26:01.926564 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:26:01.926575 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:26:01.926585 kernel: Run /init as init process
Jan 13 21:26:01.926599 kernel: with arguments:
Jan 13 21:26:01.926609 kernel: /init
Jan 13 21:26:01.926619 kernel: with environment:
Jan 13 21:26:01.926629 kernel: HOME=/
Jan 13 21:26:01.926639 kernel: TERM=linux
Jan 13 21:26:01.926649 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:26:01.926662 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:26:01.926676 systemd[1]: Detected virtualization kvm.
Jan 13 21:26:01.926691 systemd[1]: Detected architecture x86-64.
Jan 13 21:26:01.926701 systemd[1]: Running in initrd.
Jan 13 21:26:01.926712 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:26:01.926723 systemd[1]: Hostname set to .
Jan 13 21:26:01.926735 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:26:01.926746 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:26:01.926757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:26:01.926769 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:26:01.926784 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:26:01.926810 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:26:01.926824 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:26:01.926836 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:26:01.926850 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:26:01.926865 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:26:01.926877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:26:01.926888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:26:01.926900 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:26:01.926911 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:26:01.927071 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:26:01.927084 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:26:01.927096 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:26:01.927112 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:26:01.927124 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:26:01.927135 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:26:01.927147 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:26:01.927158 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:26:01.927170 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:26:01.927182 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:26:01.927193 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:26:01.927207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:26:01.927222 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:26:01.927233 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:26:01.927245 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:26:01.927257 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:26:01.927268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:26:01.927280 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:26:01.927292 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:26:01.927303 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:26:01.927319 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:26:01.927356 systemd-journald[192]: Collecting audit messages is disabled.
Jan 13 21:26:01.927387 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:26:01.927399 systemd-journald[192]: Journal started
Jan 13 21:26:01.927425 systemd-journald[192]: Runtime Journal (/run/log/journal/b825481174ed449aa5d096c41c55c781) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:26:01.930943 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:26:01.931469 systemd-modules-load[194]: Inserted module 'overlay'
Jan 13 21:26:01.961205 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:26:01.961224 kernel: Bridge firewalling registered
Jan 13 21:26:01.960587 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 13 21:26:01.967850 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:26:01.968318 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:26:01.968833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:01.974526 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:26:01.985092 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:26:01.985952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:26:01.986998 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:26:01.999334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:26:02.001703 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:26:02.018756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:02.029117 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:26:02.032410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:26:02.040113 dracut-cmdline[229]: dracut-dracut-053
Jan 13 21:26:02.043499 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:26:02.068761 systemd-resolved[233]: Positive Trust Anchors:
Jan 13 21:26:02.068776 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:26:02.068816 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:26:02.071802 systemd-resolved[233]: Defaulting to hostname 'linux'.
Jan 13 21:26:02.072877 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:26:02.078764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:26:02.148973 kernel: SCSI subsystem initialized
Jan 13 21:26:02.157936 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:26:02.168950 kernel: iscsi: registered transport (tcp)
Jan 13 21:26:02.188942 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:26:02.188983 kernel: QLogic iSCSI HBA Driver
Jan 13 21:26:02.230883 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:26:02.238067 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:26:02.261362 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:26:02.261393 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:26:02.262398 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:26:02.301973 kernel: raid6: avx2x4 gen() 29668 MB/s
Jan 13 21:26:02.318945 kernel: raid6: avx2x2 gen() 29436 MB/s
Jan 13 21:26:02.336061 kernel: raid6: avx2x1 gen() 23907 MB/s
Jan 13 21:26:02.336094 kernel: raid6: using algorithm avx2x4 gen() 29668 MB/s
Jan 13 21:26:02.354036 kernel: raid6: .... xor() 8006 MB/s, rmw enabled
Jan 13 21:26:02.354063 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:26:02.374981 kernel: xor: automatically using best checksumming function avx
Jan 13 21:26:02.533971 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:26:02.547505 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:26:02.560201 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:26:02.574168 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jan 13 21:26:02.578669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:26:02.592074 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:26:02.606234 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Jan 13 21:26:02.639224 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:26:02.650079 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:26:02.717813 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:26:02.725386 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:26:02.738900 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:26:02.742184 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:26:02.745028 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:26:02.747871 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:26:02.759982 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 21:26:02.800088 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:26:02.800108 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:26:02.800119 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:26:02.800130 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:26:02.800272 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:26:02.800283 kernel: GPT:9289727 != 19775487
Jan 13 21:26:02.800294 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:26:02.800307 kernel: GPT:9289727 != 19775487
Jan 13 21:26:02.800317 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:26:02.800327 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:26:02.763120 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:26:02.779271 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:26:02.804781 kernel: libata version 3.00 loaded.
Jan 13 21:26:02.790333 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:26:02.790524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:02.796151 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:26:02.798090 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:26:02.798384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:02.800761 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:26:02.818796 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 21:26:02.845135 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 21:26:02.845152 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 21:26:02.845344 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 21:26:02.845524 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (458)
Jan 13 21:26:02.845542 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (470)
Jan 13 21:26:02.845563 kernel: scsi host0: ahci
Jan 13 21:26:02.845748 kernel: scsi host1: ahci
Jan 13 21:26:02.845952 kernel: scsi host2: ahci
Jan 13 21:26:02.846710 kernel: scsi host3: ahci
Jan 13 21:26:02.846860 kernel: scsi host4: ahci
Jan 13 21:26:02.847023 kernel: scsi host5: ahci
Jan 13 21:26:02.847179 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 13 21:26:02.847191 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 13 21:26:02.847202 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 13 21:26:02.847213 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 13 21:26:02.847223 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 13 21:26:02.847233 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 13 21:26:02.810382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:26:02.845168 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:26:02.876004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:02.882490 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:26:02.890781 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:26:02.891201 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:26:02.896001 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:26:02.910070 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:26:02.911280 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:26:02.926248 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:02.986174 disk-uuid[564]: Primary Header is updated.
Jan 13 21:26:02.986174 disk-uuid[564]: Secondary Entries is updated.
Jan 13 21:26:02.986174 disk-uuid[564]: Secondary Header is updated.
Jan 13 21:26:02.989954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:26:02.993944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:26:03.152967 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 21:26:03.153130 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 21:26:03.155161 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 21:26:03.155947 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 21:26:03.156956 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 21:26:03.156979 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 21:26:03.158249 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 21:26:03.158270 kernel: ata3.00: applying bridge limits
Jan 13 21:26:03.159950 kernel: ata3.00: configured for UDMA/100
Jan 13 21:26:03.161948 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 21:26:03.204474 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 21:26:03.220540 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 21:26:03.220570 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 21:26:03.994960 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:26:03.995442 disk-uuid[574]: The operation has completed successfully.
Jan 13 21:26:04.022032 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:26:04.022171 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:26:04.052055 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:26:04.054993 sh[590]: Success
Jan 13 21:26:04.066943 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 21:26:04.097047 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:26:04.114689 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:26:04.118555 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:26:04.129334 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:26:04.129374 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:04.129388 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:26:04.130372 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:26:04.131126 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:26:04.135860 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:26:04.136968 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:26:04.144072 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:26:04.145831 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:26:04.155480 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:04.155531 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:04.155545 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:26:04.157968 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:26:04.166711 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:26:04.168508 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:04.177469 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:26:04.186113 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:26:04.273560 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:26:04.282088 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:26:04.304323 systemd-networkd[772]: lo: Link UP
Jan 13 21:26:04.304331 systemd-networkd[772]: lo: Gained carrier
Jan 13 21:26:04.305884 systemd-networkd[772]: Enumeration completed
Jan 13 21:26:04.305996 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:26:04.306387 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:26:04.306391 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:26:04.307212 systemd-networkd[772]: eth0: Link UP
Jan 13 21:26:04.307216 systemd-networkd[772]: eth0: Gained carrier
Jan 13 21:26:04.307222 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:26:04.308169 systemd[1]: Reached target network.target - Network.
Jan 13 21:26:04.323972 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:26:04.429368 ignition[684]: Ignition 2.19.0
Jan 13 21:26:04.429380 ignition[684]: Stage: fetch-offline
Jan 13 21:26:04.429420 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:04.429429 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:26:04.429557 ignition[684]: parsed url from cmdline: ""
Jan 13 21:26:04.429561 ignition[684]: no config URL provided
Jan 13 21:26:04.429567 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:26:04.429575 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:26:04.429601 ignition[684]: op(1): [started] loading QEMU firmware config module
Jan 13 21:26:04.429607 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:26:04.440684 ignition[684]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:26:04.479198 ignition[684]: parsing config with SHA512: 2481fdf5abdb43a6e7953de03406452662198129533402e9d943582e24341481bdd6a2a109fa93a389dc452a384e6a1586ce4d54dceea3f86b5ac811942b89df
Jan 13 21:26:04.486908 unknown[684]: fetched base config from "system"
Jan 13 21:26:04.487635 ignition[684]: fetch-offline: fetch-offline passed
Jan 13 21:26:04.486944 unknown[684]: fetched user config from "qemu"
Jan 13 21:26:04.488154 ignition[684]: Ignition finished successfully
Jan 13 21:26:04.490398 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:26:04.492413 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:26:04.507093 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:26:04.558641 ignition[783]: Ignition 2.19.0
Jan 13 21:26:04.558651 ignition[783]: Stage: kargs
Jan 13 21:26:04.558870 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:04.558880 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:26:04.559810 ignition[783]: kargs: kargs passed
Jan 13 21:26:04.559853 ignition[783]: Ignition finished successfully
Jan 13 21:26:04.562966 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:26:04.576067 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:26:04.588215 ignition[791]: Ignition 2.19.0
Jan 13 21:26:04.588225 ignition[791]: Stage: disks
Jan 13 21:26:04.588371 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:04.588381 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:26:04.591070 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:26:04.589146 ignition[791]: disks: disks passed
Jan 13 21:26:04.593170 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:26:04.589190 ignition[791]: Ignition finished successfully
Jan 13 21:26:04.595382 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:26:04.596802 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:26:04.597237 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:26:04.597596 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:26:04.607050 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:26:04.619673 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:26:04.625955 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:26:04.633018 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:26:04.738943 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:26:04.739319 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:26:04.740221 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:26:04.791151 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:26:04.792742 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:26:04.793989 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:26:04.794030 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:26:04.794055 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:26:04.801990 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:26:04.804381 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:26:04.814005 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809)
Jan 13 21:26:04.816265 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:04.816306 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:04.816320 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:26:04.820939 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:26:04.822454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:26:04.845686 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:26:04.849897 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:26:04.853873 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:26:04.858626 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:26:04.950000 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:26:04.975021 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:26:04.976157 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:26:04.983939 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:05.006493 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:26:05.017273 ignition[922]: INFO : Ignition 2.19.0
Jan 13 21:26:05.017273 ignition[922]: INFO : Stage: mount
Jan 13 21:26:05.019722 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:05.019722 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:26:05.019722 ignition[922]: INFO : mount: mount passed
Jan 13 21:26:05.019722 ignition[922]: INFO : Ignition finished successfully
Jan 13 21:26:05.024869 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:26:05.037014 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:26:05.128727 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:26:05.138171 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:26:05.145907 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (935)
Jan 13 21:26:05.145957 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:05.145972 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:05.147404 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:26:05.149941 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:26:05.151391 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:26:05.172022 ignition[952]: INFO : Ignition 2.19.0 Jan 13 21:26:05.172022 ignition[952]: INFO : Stage: files Jan 13 21:26:05.173847 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:26:05.173847 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:26:05.173847 ignition[952]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:26:05.177697 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:26:05.177697 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:26:05.181509 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:26:05.182982 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:26:05.184537 unknown[952]: wrote ssh authorized keys file for user: core Jan 13 21:26:05.185619 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:26:05.187199 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:26:05.189371 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:26:05.223231 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:26:05.363796 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:26:05.363796 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:26:05.368257 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 21:26:05.848341 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:26:06.208958 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:26:06.208958 ignition[952]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:26:06.212901 ignition[952]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:26:06.212901 ignition[952]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:26:06.212901 ignition[952]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:26:06.212901 ignition[952]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 13 21:26:06.212901 ignition[952]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:26:06.212901 ignition[952]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:26:06.212901 ignition[952]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 13 21:26:06.212901 ignition[952]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:26:06.236901 ignition[952]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:26:06.241981 ignition[952]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:26:06.243553 ignition[952]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:26:06.243553 ignition[952]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:26:06.243553 ignition[952]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:26:06.243553 ignition[952]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:26:06.243553 ignition[952]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:26:06.243553 ignition[952]: INFO : files: files passed Jan 13 21:26:06.243553 ignition[952]: INFO : Ignition finished successfully Jan 13 21:26:06.245560 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:26:06.254168 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:26:06.257288 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:26:06.259022 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 13 21:26:06.259139 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:26:06.266747 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:26:06.269419 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:26:06.269419 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:26:06.272971 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:26:06.274294 systemd-networkd[772]: eth0: Gained IPv6LL Jan 13 21:26:06.275758 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:26:06.278772 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:26:06.285110 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:26:06.311468 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:26:06.311633 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:26:06.314694 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:26:06.316599 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:26:06.318777 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:26:06.319720 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:26:06.338119 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:26:06.340883 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:26:06.354549 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:26:06.356935 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:26:06.357435 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:26:06.357734 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:26:06.357874 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:26:06.358564 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:26:06.358895 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:26:06.359226 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:26:06.359554 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:26:06.359877 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:26:06.360213 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:26:06.360529 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:26:06.360871 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:26:06.361216 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:26:06.361527 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:26:06.361824 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 13 21:26:06.405367 ignition[1007]: INFO : Ignition 2.19.0 Jan 13 21:26:06.405367 ignition[1007]: INFO : Stage: umount Jan 13 21:26:06.405367 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:26:06.405367 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:26:06.361952 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:26:06.419848 ignition[1007]: INFO : umount: umount passed Jan 13 21:26:06.419848 ignition[1007]: INFO : Ignition finished successfully Jan 13 21:26:06.362511 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:26:06.362848 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:26:06.363149 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:26:06.363268 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:26:06.363646 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:26:06.363754 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:26:06.364294 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:26:06.364401 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:26:06.364876 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:26:06.365270 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:26:06.369009 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:26:06.369419 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:26:06.369829 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:26:06.370179 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:26:06.370279 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:26:06.370678 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:26:06.370767 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:26:06.371355 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:26:06.371474 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:26:06.371832 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:26:06.371955 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:26:06.390083 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:26:06.392139 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:26:06.392259 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:26:06.395591 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:26:06.396613 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:26:06.396805 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:26:06.400074 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:26:06.400180 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:26:06.412227 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:26:06.412343 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:26:06.415716 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 13 21:26:06.415827 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:26:06.418445 systemd[1]: Stopped target network.target - Network. Jan 13 21:26:06.419810 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:26:06.419892 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:26:06.421750 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:26:06.421798 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:26:06.423713 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:26:06.423758 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:26:06.425937 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:26:06.425987 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:26:06.427317 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:26:06.429645 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:26:06.430966 systemd-networkd[772]: eth0: DHCPv6 lease lost Jan 13 21:26:06.434483 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:26:06.434965 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:26:06.435092 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:26:06.436556 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:26:06.436597 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:26:06.443035 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:26:06.443907 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:26:06.443974 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:26:06.446593 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:26:06.450377 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:26:06.450494 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:26:06.462168 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:26:06.462237 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:26:06.463854 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:26:06.463910 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:26:06.466048 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:26:06.466093 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:26:06.467632 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:26:06.467800 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:26:06.469902 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:26:06.470022 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:26:06.472494 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:26:06.472571 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:26:06.473716 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:26:06.473756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:26:06.475677 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:26:06.475727 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:26:06.477898 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:26:06.477957 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:26:06.479752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:26:06.479797 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:26:06.493070 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:26:06.494675 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:26:06.494731 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:26:06.497174 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:26:06.497222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:26:06.502792 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:26:06.502967 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:26:06.614278 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:26:06.615407 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:26:06.617904 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:26:06.620165 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:26:06.621252 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:26:06.634069 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:26:06.641658 systemd[1]: Switching root. Jan 13 21:26:06.677386 systemd-journald[192]: Journal stopped Jan 13 21:26:07.853019 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 13 21:26:07.853119 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:26:07.853142 kernel: SELinux: policy capability open_perms=1 Jan 13 21:26:07.853158 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:26:07.853174 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:26:07.853189 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:26:07.853205 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:26:07.853220 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:26:07.853236 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:26:07.853251 kernel: audit: type=1403 audit(1736803567.080:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:26:07.853271 systemd[1]: Successfully loaded SELinux policy in 39.817ms. Jan 13 21:26:07.853290 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.418ms. Jan 13 21:26:07.853313 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:26:07.853330 systemd[1]: Detected virtualization kvm. Jan 13 21:26:07.853346 systemd[1]: Detected architecture x86-64. Jan 13 21:26:07.853362 systemd[1]: Detected first boot. Jan 13 21:26:07.853378 systemd[1]: Initializing machine ID from VM UUID. 
Jan 13 21:26:07.853396 zram_generator::config[1051]: No configuration found. Jan 13 21:26:07.853417 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:26:07.853433 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:26:07.853456 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:26:07.853472 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:26:07.853489 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:26:07.853506 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:26:07.853522 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:26:07.853539 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:26:07.853558 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:26:07.853576 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:26:07.853592 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:26:07.853609 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:26:07.853626 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:26:07.853642 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:26:07.853658 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:26:07.853681 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:26:07.853698 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:26:07.853717 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:26:07.853733 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:26:07.853750 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:26:07.853768 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:26:07.853785 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:26:07.853810 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:26:07.853834 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:26:07.853851 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:26:07.853871 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:26:07.853887 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:26:07.853903 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:26:07.853933 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:26:07.853950 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:26:07.853967 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:26:07.853984 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:26:07.854000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:26:07.854017 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 13 21:26:07.854039 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:26:07.854055 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:26:07.854072 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:26:07.854088 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:26:07.854105 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:26:07.854121 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:26:07.854137 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:26:07.854156 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:26:07.854173 systemd[1]: Reached target machines.target - Containers. Jan 13 21:26:07.854192 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:26:07.854209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:26:07.854226 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:26:07.854242 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:26:07.854259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:26:07.854275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:26:07.854293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:26:07.854309 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:26:07.854328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:26:07.854346 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:26:07.854362 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:26:07.854378 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:26:07.854395 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:26:07.854411 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:26:07.854427 kernel: loop: module loaded Jan 13 21:26:07.854442 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:26:07.854459 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:26:07.854479 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:26:07.854496 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:26:07.854513 kernel: ACPI: bus type drm_connector registered Jan 13 21:26:07.854528 kernel: fuse: init (API version 7.39) Jan 13 21:26:07.854544 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:26:07.854560 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:26:07.854577 systemd[1]: Stopped verity-setup.service. Jan 13 21:26:07.854594 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 13 21:26:07.854631 systemd-journald[1121]: Collecting audit messages is disabled. Jan 13 21:26:07.854663 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:26:07.854679 systemd-journald[1121]: Journal started Jan 13 21:26:07.854708 systemd-journald[1121]: Runtime Journal (/run/log/journal/b825481174ed449aa5d096c41c55c781) is 6.0M, max 48.4M, 42.3M free. Jan 13 21:26:07.604317 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:26:07.623858 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:26:07.624305 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:26:07.857568 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:26:07.858889 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:26:07.860731 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:26:07.862245 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:26:07.863872 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:26:07.865542 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:26:07.867251 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:26:07.869086 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:26:07.871194 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:26:07.871425 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:26:07.873365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:26:07.873603 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:26:07.875612 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:26:07.875860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:26:07.877569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:26:07.877768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:26:07.879644 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:26:07.879873 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:26:07.881658 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:26:07.881904 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:26:07.884029 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:26:07.885913 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:26:07.888242 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:26:07.911239 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:26:07.926049 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:26:07.929161 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:26:07.930679 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:26:07.930723 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:26:07.933524 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 13 21:26:07.936637 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:26:07.939755 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:26:07.941481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:26:07.943392 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:26:07.946178 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:26:07.948085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:26:07.955081 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:26:07.957841 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:26:07.959861 systemd-journald[1121]: Time spent on flushing to /var/log/journal/b825481174ed449aa5d096c41c55c781 is 20.627ms for 947 entries. Jan 13 21:26:07.959861 systemd-journald[1121]: System Journal (/var/log/journal/b825481174ed449aa5d096c41c55c781) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:26:08.001693 systemd-journald[1121]: Received client request to flush runtime journal. Jan 13 21:26:08.001747 kernel: loop0: detected capacity change from 0 to 142488 Jan 13 21:26:07.967714 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:26:07.973887 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:26:07.980111 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:26:07.984288 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:26:07.987388 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:26:07.989067 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:26:07.992298 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:26:07.995229 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:26:08.000024 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:26:08.007324 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:26:08.012000 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:26:08.014332 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:26:08.023874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:26:08.035940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:26:08.045355 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:26:08.046964 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:26:08.049163 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:26:08.062563 kernel: loop1: detected capacity change from 0 to 211296 Jan 13 21:26:08.060541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:26:08.062839 udevadm[1174]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:26:08.128961 kernel: loop2: detected capacity change from 0 to 140768 Jan 13 21:26:08.136913 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 13 21:26:08.136955 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 13 21:26:08.151484 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:26:08.176965 kernel: loop3: detected capacity change from 0 to 142488 Jan 13 21:26:08.190948 kernel: loop4: detected capacity change from 0 to 211296 Jan 13 21:26:08.200970 kernel: loop5: detected capacity change from 0 to 140768 Jan 13 21:26:08.211726 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:26:08.213333 (sd-merge)[1190]: Merged extensions into '/usr'. Jan 13 21:26:08.217990 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:26:08.218115 systemd[1]: Reloading... Jan 13 21:26:08.282946 zram_generator::config[1212]: No configuration found. Jan 13 21:26:08.465976 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:26:08.466937 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:26:08.520247 systemd[1]: Reloading finished in 301 ms. Jan 13 21:26:08.570341 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:26:08.572721 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:26:08.616184 systemd[1]: Starting ensure-sysext.service... Jan 13 21:26:08.618764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:26:08.627435 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:26:08.627452 systemd[1]: Reloading... Jan 13 21:26:08.664864 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:26:08.665437 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:26:08.667667 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:26:08.668273 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jan 13 21:26:08.668461 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jan 13 21:26:08.674940 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:26:08.675146 systemd-tmpfiles[1254]: Skipping /boot Jan 13 21:26:08.714860 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:26:08.715485 systemd-tmpfiles[1254]: Skipping /boot Jan 13 21:26:08.746950 zram_generator::config[1283]: No configuration found. Jan 13 21:26:08.870553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:26:08.938560 systemd[1]: Reloading finished in 310 ms. Jan 13 21:26:08.958740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 13 21:26:08.960766 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:26:08.978752 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:26:08.981526 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:26:08.984235 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:26:08.988199 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:26:08.994182 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:26:08.999207 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:26:09.007214 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:26:09.012280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:26:09.012569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:26:09.014611 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:26:09.017090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:26:09.022312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:26:09.024110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:26:09.024271 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:26:09.025832 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:26:09.026414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:26:09.035192 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:26:09.035354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:26:09.036755 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Jan 13 21:26:09.037195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:26:09.041216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:26:09.041337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:26:09.042222 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:26:09.044166 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:26:09.046208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:26:09.046382 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:26:09.048018 augenrules[1346]: No rules Jan 13 21:26:09.048201 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:26:09.048377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:26:09.059124 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 13 21:26:09.061109 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:26:09.063553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:26:09.063874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:26:09.072388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:26:09.072729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:26:09.084174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:26:09.089181 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:26:09.094144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:26:09.098400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:26:09.100137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:26:09.104270 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:26:09.107042 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:26:09.107977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:26:09.110030 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:26:09.112121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:26:09.112393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:26:09.114063 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:26:09.114335 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:26:09.116270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:26:09.116459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:26:09.118515 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:26:09.118711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:26:09.125843 systemd[1]: Finished ensure-sysext.service. Jan 13 21:26:09.131388 systemd-resolved[1323]: Positive Trust Anchors: Jan 13 21:26:09.131410 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:26:09.131442 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:26:09.144974 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1369) Jan 13 21:26:09.138472 systemd-resolved[1323]: Defaulting to hostname 'linux'. Jan 13 21:26:09.143765 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 13 21:26:09.147020 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:26:09.151877 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:26:09.187667 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:26:09.188962 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:26:09.189041 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:26:09.191161 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:26:09.194620 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:26:09.246398 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:26:09.275023 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:26:09.293841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:26:09.300956 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:26:09.304970 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:26:09.322156 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 21:26:09.324130 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 21:26:09.324360 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 21:26:09.324607 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 21:26:09.338509 systemd-networkd[1394]: lo: Link UP Jan 13 21:26:09.338520 systemd-networkd[1394]: lo: Gained carrier Jan 13 21:26:09.340158 systemd-networkd[1394]: Enumeration completed Jan 13 21:26:09.340270 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:26:09.340597 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:26:09.340601 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:26:09.340701 systemd[1]: Reached target network.target - Network. Jan 13 21:26:09.341399 systemd-networkd[1394]: eth0: Link UP Jan 13 21:26:09.341403 systemd-networkd[1394]: eth0: Gained carrier Jan 13 21:26:09.341415 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:26:09.349176 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:26:09.354398 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:26:09.355610 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:26:09.357251 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:26:09.359806 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:26:09.361081 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. 
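
eth0 above is picked up by the stock catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd logs the "potentially unpredictable interface name" note before taking the DHCPv4 lease (10.0.0.116/16 from 10.0.0.1). To pin the behavior per interface you would normally ship your own .network file; networkd applies the first matching file in lexical order, so a name like 10-eth0.network sorts ahead of zz-default.network and wins. A hypothetical Butane sketch, not used in this boot:

# Hypothetical override of the catch-all network unit.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/systemd/network/10-eth0.network
      contents:
        inline: |
          [Match]
          Name=eth0

          [Network]
          DHCP=yes
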
Jan 13 21:26:09.973582 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:26:09.973632 systemd-timesyncd[1396]: Initial clock synchronization to Mon 2025-01-13 21:26:09.973389 UTC. Jan 13 21:26:09.974606 systemd-resolved[1323]: Clock change detected. Flushing caches. Jan 13 21:26:09.992718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:26:10.064466 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:26:10.079526 kernel: kvm_amd: TSC scaling supported Jan 13 21:26:10.079629 kernel: kvm_amd: Nested Virtualization enabled Jan 13 21:26:10.079648 kernel: kvm_amd: Nested Paging enabled Jan 13 21:26:10.079664 kernel: kvm_amd: LBR virtualization supported Jan 13 21:26:10.080793 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 21:26:10.080821 kernel: kvm_amd: Virtual GIF supported Jan 13 21:26:10.103313 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:26:10.142809 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:26:10.154930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:26:10.170760 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:26:10.183902 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:26:10.229570 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:26:10.231294 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:26:10.232469 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:26:10.233670 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:26:10.234953 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:26:10.236537 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:26:10.237843 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:26:10.239142 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:26:10.240678 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:26:10.240712 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:26:10.241655 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:26:10.243410 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:26:10.246444 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:26:10.260098 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:26:10.263404 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:26:10.265253 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:26:10.266519 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:26:10.267598 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:26:10.268888 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:26:10.268923 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
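
Note the jump in the journal clock at the top of this block: systemd-timesyncd reached 10.0.0.1:123 (a server most likely advertised in the DHCP lease, though the log does not say) and stepped the system clock, which is also why systemd-resolved reports a clock change and flushes its caches. Pinning the servers instead of trusting the network would look like the hypothetical drop-in below, again expressed as Butane; the server names are examples only.

# Hypothetical timesyncd drop-in; not present in this boot.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/systemd/timesyncd.conf.d/10-ntp.conf
      contents:
        inline: |
          [Time]
          NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org
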
Jan 13 21:26:10.270327 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:26:10.273120 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:26:10.277882 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:26:10.282574 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:26:10.285768 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:26:10.286087 jq[1426]: false Jan 13 21:26:10.286752 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:26:10.290007 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:26:10.297482 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:26:10.303464 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:26:10.309131 extend-filesystems[1427]: Found loop3 Jan 13 21:26:10.309131 extend-filesystems[1427]: Found loop4 Jan 13 21:26:10.309131 extend-filesystems[1427]: Found loop5 Jan 13 21:26:10.309131 extend-filesystems[1427]: Found sr0 Jan 13 21:26:10.309131 extend-filesystems[1427]: Found vda Jan 13 21:26:10.309131 extend-filesystems[1427]: Found vda1 Jan 13 21:26:10.309131 extend-filesystems[1427]: Found vda2 Jan 13 21:26:10.309131 extend-filesystems[1427]: Found vda3 Jan 13 21:26:10.309131 extend-filesystems[1427]: Found usr Jan 13 21:26:10.309131 extend-filesystems[1427]: Found vda4 Jan 13 21:26:10.309131 extend-filesystems[1427]: Found vda6 Jan 13 21:26:10.364218 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:26:10.364248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1366) Jan 13 21:26:10.306160 dbus-daemon[1425]: [system] SELinux support is enabled Jan 13 21:26:10.311483 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:26:10.364663 extend-filesystems[1427]: Found vda7 Jan 13 21:26:10.364663 extend-filesystems[1427]: Found vda9 Jan 13 21:26:10.364663 extend-filesystems[1427]: Checking size of /dev/vda9 Jan 13 21:26:10.364663 extend-filesystems[1427]: Resized partition /dev/vda9 Jan 13 21:26:10.320494 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:26:10.376495 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:26:10.322815 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:26:10.379143 update_engine[1441]: I20250113 21:26:10.349833 1441 main.cc:92] Flatcar Update Engine starting Jan 13 21:26:10.379143 update_engine[1441]: I20250113 21:26:10.351222 1441 update_check_scheduler.cc:74] Next update check in 11m52s Jan 13 21:26:10.325949 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:26:10.379549 jq[1443]: true Jan 13 21:26:10.327667 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:26:10.332990 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:26:10.337720 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:26:10.343090 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 13 21:26:10.343417 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:26:10.343844 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:26:10.344094 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:26:10.357336 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:26:10.363602 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:26:10.363871 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:26:10.381287 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:26:10.387836 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:26:10.413316 jq[1451]: true Jan 13 21:26:10.413510 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:26:10.413510 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:26:10.413510 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:26:10.408645 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:26:10.417569 extend-filesystems[1427]: Resized filesystem in /dev/vda9 Jan 13 21:26:10.409966 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:26:10.431291 tar[1449]: linux-amd64/helm Jan 13 21:26:10.440754 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:26:10.442332 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:26:10.442358 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:26:10.443859 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:26:10.443882 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:26:10.453514 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:26:10.521175 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:26:10.521206 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:26:10.523510 systemd-logind[1437]: New seat seat0. Jan 13 21:26:10.525735 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:26:10.533651 bash[1481]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:26:10.536033 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:26:10.539593 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 13 21:26:10.611307 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:26:10.896726 containerd[1453]: time="2025-01-13T21:26:10.895251566Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:26:10.908856 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:26:10.976674 containerd[1453]: time="2025-01-13T21:26:10.976592800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:10.978784 containerd[1453]: time="2025-01-13T21:26:10.978728185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:10.978952 containerd[1453]: time="2025-01-13T21:26:10.978930524Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:26:10.979034 containerd[1453]: time="2025-01-13T21:26:10.979013660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:26:10.979334 containerd[1453]: time="2025-01-13T21:26:10.979315086Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:26:10.979401 containerd[1453]: time="2025-01-13T21:26:10.979387592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:10.979537 containerd[1453]: time="2025-01-13T21:26:10.979519279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:10.979590 containerd[1453]: time="2025-01-13T21:26:10.979578289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:10.979855 containerd[1453]: time="2025-01-13T21:26:10.979829350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:10.980838 containerd[1453]: time="2025-01-13T21:26:10.979911073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:10.980838 containerd[1453]: time="2025-01-13T21:26:10.979935128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:10.980838 containerd[1453]: time="2025-01-13T21:26:10.979945728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:10.980838 containerd[1453]: time="2025-01-13T21:26:10.980053150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:10.980838 containerd[1453]: time="2025-01-13T21:26:10.980373621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:26:10.980838 containerd[1453]: time="2025-01-13T21:26:10.980493536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:10.980838 containerd[1453]: time="2025-01-13T21:26:10.980506620Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:26:10.980838 containerd[1453]: time="2025-01-13T21:26:10.980617698Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:26:10.980838 containerd[1453]: time="2025-01-13T21:26:10.980671750Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:26:10.985449 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:26:10.990310 containerd[1453]: time="2025-01-13T21:26:10.990245734Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:26:10.990387 containerd[1453]: time="2025-01-13T21:26:10.990363205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:26:10.990412 containerd[1453]: time="2025-01-13T21:26:10.990396798Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:26:10.990519 containerd[1453]: time="2025-01-13T21:26:10.990425472Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:26:10.990519 containerd[1453]: time="2025-01-13T21:26:10.990473652Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:26:10.990710 containerd[1453]: time="2025-01-13T21:26:10.990687653Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:26:10.991129 containerd[1453]: time="2025-01-13T21:26:10.991105557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:26:10.991312 containerd[1453]: time="2025-01-13T21:26:10.991288871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:26:10.991349 containerd[1453]: time="2025-01-13T21:26:10.991315150Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:26:10.991349 containerd[1453]: time="2025-01-13T21:26:10.991336911Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:26:10.991385 containerd[1453]: time="2025-01-13T21:26:10.991356718Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:26:10.991385 containerd[1453]: time="2025-01-13T21:26:10.991374752Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:26:10.991430 containerd[1453]: time="2025-01-13T21:26:10.991393617Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:26:10.991430 containerd[1453]: time="2025-01-13T21:26:10.991412533Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 13 21:26:10.991466 containerd[1453]: time="2025-01-13T21:26:10.991433883Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:26:10.991466 containerd[1453]: time="2025-01-13T21:26:10.991454822Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:26:10.991501 containerd[1453]: time="2025-01-13T21:26:10.991472054Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:26:10.991501 containerd[1453]: time="2025-01-13T21:26:10.991491581Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:26:10.991535 containerd[1453]: time="2025-01-13T21:26:10.991518772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991560 containerd[1453]: time="2025-01-13T21:26:10.991538469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991589 containerd[1453]: time="2025-01-13T21:26:10.991561572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991613 containerd[1453]: time="2025-01-13T21:26:10.991586789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991613 containerd[1453]: time="2025-01-13T21:26:10.991606997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991656 containerd[1453]: time="2025-01-13T21:26:10.991627396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991656 containerd[1453]: time="2025-01-13T21:26:10.991648806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991697 containerd[1453]: time="2025-01-13T21:26:10.991668863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991716 containerd[1453]: time="2025-01-13T21:26:10.991687689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991736 containerd[1453]: time="2025-01-13T21:26:10.991721913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991755 containerd[1453]: time="2025-01-13T21:26:10.991740357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991782 containerd[1453]: time="2025-01-13T21:26:10.991761587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991802 containerd[1453]: time="2025-01-13T21:26:10.991781004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991844 containerd[1453]: time="2025-01-13T21:26:10.991821600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:26:10.991887 containerd[1453]: time="2025-01-13T21:26:10.991861304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 13 21:26:10.991923 containerd[1453]: time="2025-01-13T21:26:10.991902662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.991945 containerd[1453]: time="2025-01-13T21:26:10.991922649Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:26:10.992097 containerd[1453]: time="2025-01-13T21:26:10.991982712Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:26:10.992097 containerd[1453]: time="2025-01-13T21:26:10.992018659Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:26:10.992097 containerd[1453]: time="2025-01-13T21:26:10.992033427Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:26:10.992097 containerd[1453]: time="2025-01-13T21:26:10.992049257Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:26:10.992097 containerd[1453]: time="2025-01-13T21:26:10.992061229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:26:10.992097 containerd[1453]: time="2025-01-13T21:26:10.992077169Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:26:10.992219 containerd[1453]: time="2025-01-13T21:26:10.992104901Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:26:10.992219 containerd[1453]: time="2025-01-13T21:26:10.992126341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:26:10.992625 containerd[1453]: time="2025-01-13T21:26:10.992551118Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:26:10.992806 containerd[1453]: time="2025-01-13T21:26:10.992636007Z" level=info msg="Connect containerd service" Jan 13 21:26:10.992806 containerd[1453]: time="2025-01-13T21:26:10.992698955Z" level=info msg="using legacy CRI server" Jan 13 21:26:10.992806 containerd[1453]: time="2025-01-13T21:26:10.992710567Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:26:10.992895 containerd[1453]: time="2025-01-13T21:26:10.992850309Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:26:10.993992 containerd[1453]: time="2025-01-13T21:26:10.993964849Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:26:10.994163 
containerd[1453]: time="2025-01-13T21:26:10.994112226Z" level=info msg="Start subscribing containerd event" Jan 13 21:26:10.994202 containerd[1453]: time="2025-01-13T21:26:10.994192716Z" level=info msg="Start recovering state" Jan 13 21:26:10.994320 containerd[1453]: time="2025-01-13T21:26:10.994292233Z" level=info msg="Start event monitor" Jan 13 21:26:10.994349 containerd[1453]: time="2025-01-13T21:26:10.994319214Z" level=info msg="Start snapshots syncer" Jan 13 21:26:10.994349 containerd[1453]: time="2025-01-13T21:26:10.994334863Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:26:10.994418 containerd[1453]: time="2025-01-13T21:26:10.994347677Z" level=info msg="Start streaming server" Jan 13 21:26:10.994966 containerd[1453]: time="2025-01-13T21:26:10.994942853Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:26:10.995038 containerd[1453]: time="2025-01-13T21:26:10.995017704Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:26:10.995214 containerd[1453]: time="2025-01-13T21:26:10.995190007Z" level=info msg="containerd successfully booted in 0.102591s" Jan 13 21:26:10.997283 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:26:10.999489 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:26:11.006638 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:26:11.006914 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:26:11.010026 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:26:11.051096 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:26:11.062726 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:26:11.065553 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:26:11.066990 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:26:11.103049 tar[1449]: linux-amd64/LICENSE Jan 13 21:26:11.103049 tar[1449]: linux-amd64/README.md Jan 13 21:26:11.119091 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:26:11.747604 systemd-networkd[1394]: eth0: Gained IPv6LL Jan 13 21:26:11.751604 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:26:11.753460 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:26:11.768473 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:26:11.770953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:26:11.773116 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:26:11.792024 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:26:11.792418 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:26:11.794034 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:26:11.795637 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:26:12.818370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:26:12.820309 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:26:12.821723 systemd[1]: Startup finished in 749ms (kernel) + 5.386s (initrd) + 5.170s (userspace) = 11.306s. 
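Once containerd logs serving on /run/containerd/containerd.sock and "containerd successfully booted", other services reach it over that socket. A minimal sketch of such a client, assuming the containerd Go client library (github.com/containerd/containerd) and read access to the socket:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the socket containerd reported serving on above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Every containerd call is namespaced; "default" is conventional
	// for ad-hoc clients, while the CRI plugin uses "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "default")

	v, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}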
Jan 13 21:26:12.850675 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:26:13.603606 kubelet[1538]: E0113 21:26:13.603511 1538 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:26:13.608572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:26:13.608779 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:26:13.609169 systemd[1]: kubelet.service: Consumed 1.755s CPU time. Jan 13 21:26:20.190579 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:26:20.192191 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:56684.service - OpenSSH per-connection server daemon (10.0.0.1:56684). Jan 13 21:26:20.236351 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 56684 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:20.238566 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:20.247943 systemd-logind[1437]: New session 1 of user core. Jan 13 21:26:20.249250 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:26:20.263580 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:26:20.278388 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:26:20.281571 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:26:20.291244 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:26:20.406688 systemd[1556]: Queued start job for default target default.target. Jan 13 21:26:20.417891 systemd[1556]: Created slice app.slice - User Application Slice. Jan 13 21:26:20.417918 systemd[1556]: Reached target paths.target - Paths. Jan 13 21:26:20.417932 systemd[1556]: Reached target timers.target - Timers. Jan 13 21:26:20.419555 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:26:20.432020 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:26:20.432151 systemd[1556]: Reached target sockets.target - Sockets. Jan 13 21:26:20.432177 systemd[1556]: Reached target basic.target - Basic System. Jan 13 21:26:20.432215 systemd[1556]: Reached target default.target - Main User Target. Jan 13 21:26:20.432248 systemd[1556]: Startup finished in 133ms. Jan 13 21:26:20.432951 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:26:20.434801 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:26:20.498800 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:56698.service - OpenSSH per-connection server daemon (10.0.0.1:56698). Jan 13 21:26:20.536159 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 56698 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:20.537944 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:20.542417 systemd-logind[1437]: New session 2 of user core. Jan 13 21:26:20.552419 systemd[1]: Started session-2.scope - Session 2 of User core. 
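The kubelet exit at the top of this stretch is the expected crash loop on a node where kubeadm has not yet run: kubeadm init or join is what writes /var/lib/kubelet/config.yaml, and until that file exists the kubelet exits with status 1 and systemd keeps restarting it (the identical failure repeats below at restart counters 1 and 2). A minimal sketch of the same pre-flight check, standard library only (a hypothetical helper, not kubelet code):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	// Mirror the failure mode in the log: the kubelet cannot start
	// until kubeadm (or an operator) has written its config file.
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		fmt.Printf("%s missing: run kubeadm init/join first\n", path)
		os.Exit(1)
	} else if err != nil {
		fmt.Println("stat failed:", err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present")
}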
Jan 13 21:26:20.606130 sshd[1567]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:20.617646 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:56698.service: Deactivated successfully. Jan 13 21:26:20.620016 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:26:20.621639 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:26:20.631611 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:56712.service - OpenSSH per-connection server daemon (10.0.0.1:56712). Jan 13 21:26:20.632632 systemd-logind[1437]: Removed session 2. Jan 13 21:26:20.661763 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 56712 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:20.663513 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:20.668405 systemd-logind[1437]: New session 3 of user core. Jan 13 21:26:20.679496 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:26:20.729863 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:20.757740 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:56712.service: Deactivated successfully. Jan 13 21:26:20.759455 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:26:20.761044 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:26:20.771703 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:56728.service - OpenSSH per-connection server daemon (10.0.0.1:56728). Jan 13 21:26:20.772939 systemd-logind[1437]: Removed session 3. Jan 13 21:26:20.800680 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 56728 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:20.802364 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:20.806592 systemd-logind[1437]: New session 4 of user core. Jan 13 21:26:20.822470 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:26:20.878090 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:20.889550 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:56728.service: Deactivated successfully. Jan 13 21:26:20.891531 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:26:20.893105 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:26:20.894586 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:56744.service - OpenSSH per-connection server daemon (10.0.0.1:56744). Jan 13 21:26:20.895485 systemd-logind[1437]: Removed session 4. Jan 13 21:26:20.940346 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 56744 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:20.942057 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:20.946190 systemd-logind[1437]: New session 5 of user core. Jan 13 21:26:20.956420 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:26:21.016147 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:26:21.016502 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:26:21.035673 sudo[1591]: pam_unix(sudo:session): session closed for user root Jan 13 21:26:21.038044 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:21.057930 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:56744.service: Deactivated successfully. 
Jan 13 21:26:21.060494 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:26:21.062629 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:26:21.072773 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:56758.service - OpenSSH per-connection server daemon (10.0.0.1:56758). Jan 13 21:26:21.073930 systemd-logind[1437]: Removed session 5. Jan 13 21:26:21.104876 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 56758 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:21.106995 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:21.111497 systemd-logind[1437]: New session 6 of user core. Jan 13 21:26:21.127589 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:26:21.182566 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:26:21.182918 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:26:21.187105 sudo[1600]: pam_unix(sudo:session): session closed for user root Jan 13 21:26:21.193846 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:26:21.194207 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:26:21.214696 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:26:21.216902 auditctl[1603]: No rules Jan 13 21:26:21.218531 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:26:21.218884 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:26:21.221069 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:26:21.252507 augenrules[1621]: No rules Jan 13 21:26:21.254409 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:26:21.255904 sudo[1599]: pam_unix(sudo:session): session closed for user root Jan 13 21:26:21.258012 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:21.268692 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:56758.service: Deactivated successfully. Jan 13 21:26:21.271202 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:26:21.273322 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:26:21.285784 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:56772.service - OpenSSH per-connection server daemon (10.0.0.1:56772). Jan 13 21:26:21.286936 systemd-logind[1437]: Removed session 6. Jan 13 21:26:21.314847 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 56772 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:21.316577 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:21.321207 systemd-logind[1437]: New session 7 of user core. Jan 13 21:26:21.331559 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:26:21.385728 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:26:21.386079 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:26:21.813477 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 13 21:26:21.813662 (dockerd)[1650]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:26:22.346836 dockerd[1650]: time="2025-01-13T21:26:22.346748421Z" level=info msg="Starting up" Jan 13 21:26:22.694947 systemd[1]: var-lib-docker-metacopy\x2dcheck1089625417-merged.mount: Deactivated successfully. Jan 13 21:26:22.720832 dockerd[1650]: time="2025-01-13T21:26:22.720767470Z" level=info msg="Loading containers: start." Jan 13 21:26:22.854285 kernel: Initializing XFRM netlink socket Jan 13 21:26:22.987755 systemd-networkd[1394]: docker0: Link UP Jan 13 21:26:23.012947 dockerd[1650]: time="2025-01-13T21:26:23.012889563Z" level=info msg="Loading containers: done." Jan 13 21:26:23.032350 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2953104106-merged.mount: Deactivated successfully. Jan 13 21:26:23.035430 dockerd[1650]: time="2025-01-13T21:26:23.035385641Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:26:23.035535 dockerd[1650]: time="2025-01-13T21:26:23.035512119Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:26:23.035653 dockerd[1650]: time="2025-01-13T21:26:23.035634398Z" level=info msg="Daemon has completed initialization" Jan 13 21:26:23.078433 dockerd[1650]: time="2025-01-13T21:26:23.078321982Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:26:23.078623 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:26:23.859067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:26:23.874540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:26:24.094315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:26:24.094830 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:26:24.264675 containerd[1453]: time="2025-01-13T21:26:24.264513114Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:26:24.286343 kubelet[1805]: E0113 21:26:24.286252 1805 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:26:24.292846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:26:24.293049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:26:25.148165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927690903.mount: Deactivated successfully. 
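With "Loading containers: done." and "API listen on /run/docker.sock" logged above, the Docker daemon is ready for clients. A minimal sketch of probing that endpoint with the Docker Go SDK (github.com/docker/docker/client), pointed at the socket path from the log:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Target the socket the daemon reported listening on.
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("daemon up, API version:", ping.APIVersion)
}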
Jan 13 21:26:26.873251 containerd[1453]: time="2025-01-13T21:26:26.873172451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:26.901055 containerd[1453]: time="2025-01-13T21:26:26.900965594Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 21:26:26.929429 containerd[1453]: time="2025-01-13T21:26:26.929356787Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:26.970959 containerd[1453]: time="2025-01-13T21:26:26.970903672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:26.971997 containerd[1453]: time="2025-01-13T21:26:26.971962558Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.707384302s" Jan 13 21:26:26.972076 containerd[1453]: time="2025-01-13T21:26:26.972009636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 21:26:27.000571 containerd[1453]: time="2025-01-13T21:26:27.000519893Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:26:29.535386 containerd[1453]: time="2025-01-13T21:26:29.535145090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:29.536700 containerd[1453]: time="2025-01-13T21:26:29.536640515Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 21:26:29.538418 containerd[1453]: time="2025-01-13T21:26:29.538387501Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:29.541390 containerd[1453]: time="2025-01-13T21:26:29.541346039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:29.542416 containerd[1453]: time="2025-01-13T21:26:29.542373917Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.541810492s" Jan 13 21:26:29.542416 containerd[1453]: time="2025-01-13T21:26:29.542409664Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
21:26:29.570607 containerd[1453]: time="2025-01-13T21:26:29.570552061Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:26:30.860570 containerd[1453]: time="2025-01-13T21:26:30.860496988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:30.861358 containerd[1453]: time="2025-01-13T21:26:30.861279496Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 21:26:30.862546 containerd[1453]: time="2025-01-13T21:26:30.862500075Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:30.867655 containerd[1453]: time="2025-01-13T21:26:30.867463343Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.296864173s" Jan 13 21:26:30.867655 containerd[1453]: time="2025-01-13T21:26:30.867532903Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 21:26:30.868208 containerd[1453]: time="2025-01-13T21:26:30.867925479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:30.893118 containerd[1453]: time="2025-01-13T21:26:30.893032965Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:26:31.908480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount822636556.mount: Deactivated successfully. 
Jan 13 21:26:32.921922 containerd[1453]: time="2025-01-13T21:26:32.921850061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:32.955073 containerd[1453]: time="2025-01-13T21:26:32.954960765Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 21:26:32.988981 containerd[1453]: time="2025-01-13T21:26:32.988916264Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:33.033856 containerd[1453]: time="2025-01-13T21:26:33.033798725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:33.034380 containerd[1453]: time="2025-01-13T21:26:33.034342495Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.141264565s" Jan 13 21:26:33.034413 containerd[1453]: time="2025-01-13T21:26:33.034388811Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 21:26:33.058452 containerd[1453]: time="2025-01-13T21:26:33.058406153Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:26:33.739278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127137008.mount: Deactivated successfully. 
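Each PullImage round trip in these entries resolves a tag to a pinned repo digest and reports the image id, unpacked size, and wall-clock pull time. A minimal sketch of an equivalent pull through the containerd Go client, using the pause:3.9 reference that appears further down in the log; the "k8s.io" namespace is the one the CRI plugin pulls into:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack fetches the image and unpacks it into the
	// default snapshotter (overlayfs, per the CRI config dump above).
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s digest=%s\n", img.Name(), img.Target().Digest)
}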
Jan 13 21:26:34.417737 containerd[1453]: time="2025-01-13T21:26:34.417676780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:34.418405 containerd[1453]: time="2025-01-13T21:26:34.418348179Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:26:34.419540 containerd[1453]: time="2025-01-13T21:26:34.419495361Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:34.423175 containerd[1453]: time="2025-01-13T21:26:34.423128414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:34.424232 containerd[1453]: time="2025-01-13T21:26:34.424196707Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.36574535s" Jan 13 21:26:34.424232 containerd[1453]: time="2025-01-13T21:26:34.424230340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:26:34.445640 containerd[1453]: time="2025-01-13T21:26:34.445595087Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:26:34.543388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:26:34.552416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:26:34.693997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:26:34.698338 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:26:34.741428 kubelet[1976]: E0113 21:26:34.741305 1976 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:26:34.746427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:26:34.746696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:26:35.190110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446368949.mount: Deactivated successfully. 
Jan 13 21:26:35.198159 containerd[1453]: time="2025-01-13T21:26:35.198095864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:35.198956 containerd[1453]: time="2025-01-13T21:26:35.198888421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 21:26:35.200083 containerd[1453]: time="2025-01-13T21:26:35.200039349Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:35.202247 containerd[1453]: time="2025-01-13T21:26:35.202198749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:35.202955 containerd[1453]: time="2025-01-13T21:26:35.202909191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 757.276534ms" Jan 13 21:26:35.202955 containerd[1453]: time="2025-01-13T21:26:35.202946681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:26:35.227676 containerd[1453]: time="2025-01-13T21:26:35.227621115Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:26:35.794504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084602255.mount: Deactivated successfully. Jan 13 21:26:38.463949 containerd[1453]: time="2025-01-13T21:26:38.463848536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:38.464877 containerd[1453]: time="2025-01-13T21:26:38.464822563Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 21:26:38.466235 containerd[1453]: time="2025-01-13T21:26:38.466196620Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:38.469475 containerd[1453]: time="2025-01-13T21:26:38.469445533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:38.470602 containerd[1453]: time="2025-01-13T21:26:38.470543021Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.242876882s" Jan 13 21:26:38.470602 containerd[1453]: time="2025-01-13T21:26:38.470598565Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 21:26:40.725131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:26:40.735579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:26:40.754888 systemd[1]: Reloading requested from client PID 2122 ('systemctl') (unit session-7.scope)... Jan 13 21:26:40.754903 systemd[1]: Reloading... Jan 13 21:26:40.833296 zram_generator::config[2161]: No configuration found. Jan 13 21:26:41.058927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:26:41.134274 systemd[1]: Reloading finished in 378 ms. Jan 13 21:26:41.188726 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:26:41.192873 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:26:41.193120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:26:41.207616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:26:41.350022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:26:41.354545 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:26:41.404813 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:26:41.404813 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:26:41.404813 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:26:41.405237 kubelet[2211]: I0113 21:26:41.404860 2211 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:26:41.590145 kubelet[2211]: I0113 21:26:41.590103 2211 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:26:41.590145 kubelet[2211]: I0113 21:26:41.590141 2211 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:26:41.590396 kubelet[2211]: I0113 21:26:41.590378 2211 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:26:41.607800 kubelet[2211]: E0113 21:26:41.607670 2211 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:41.608400 kubelet[2211]: I0113 21:26:41.608369 2211 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:26:41.622481 kubelet[2211]: I0113 21:26:41.622442 2211 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:26:41.623203 kubelet[2211]: I0113 21:26:41.623177 2211 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:26:41.623397 kubelet[2211]: I0113 21:26:41.623370 2211 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:26:41.623818 kubelet[2211]: I0113 21:26:41.623791 2211 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:26:41.623818 kubelet[2211]: I0113 21:26:41.623811 2211 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:26:41.623964 kubelet[2211]: I0113 21:26:41.623941 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:26:41.624095 kubelet[2211]: I0113 21:26:41.624048 2211 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:26:41.624095 kubelet[2211]: I0113 21:26:41.624070 2211 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:26:41.624291 kubelet[2211]: I0113 21:26:41.624118 2211 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:26:41.624291 kubelet[2211]: I0113 21:26:41.624144 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:26:41.624888 kubelet[2211]: W0113 21:26:41.624587 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:41.624888 kubelet[2211]: E0113 21:26:41.624651 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:41.624888 kubelet[2211]: W0113 21:26:41.624801 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused 
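The reflector warnings and errors here all share one cause: the kubelet starts before anything listens on 10.0.0.116:6443, so every list/watch and the certificate signing request fail with connection refused until the control-plane comes up. A minimal sketch of the same reachability probe, standard library only:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the endpoint the kubelet's reflectors are dialing.
	conn, err := net.DialTimeout("tcp", "10.0.0.116:6443", 2*time.Second)
	if err != nil {
		// Matches the log: "dial tcp 10.0.0.116:6443: connect: connection refused"
		fmt.Println("API server not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("API server port is accepting connections")
}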
Jan 13 21:26:41.624888 kubelet[2211]: E0113 21:26:41.624852 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:41.625409 kubelet[2211]: I0113 21:26:41.625366 2211 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:26:41.627854 kubelet[2211]: I0113 21:26:41.627831 2211 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:26:41.629028 kubelet[2211]: W0113 21:26:41.628996 2211 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:26:41.629692 kubelet[2211]: I0113 21:26:41.629664 2211 server.go:1256] "Started kubelet" Jan 13 21:26:41.630516 kubelet[2211]: I0113 21:26:41.630374 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:26:41.631225 kubelet[2211]: I0113 21:26:41.630700 2211 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:26:41.631225 kubelet[2211]: I0113 21:26:41.630756 2211 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:26:41.631225 kubelet[2211]: I0113 21:26:41.631066 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:26:41.631610 kubelet[2211]: I0113 21:26:41.631584 2211 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:26:41.633666 kubelet[2211]: E0113 21:26:41.633110 2211 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:26:41.633666 kubelet[2211]: I0113 21:26:41.633165 2211 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:26:41.633666 kubelet[2211]: I0113 21:26:41.633286 2211 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:26:41.633666 kubelet[2211]: I0113 21:26:41.633345 2211 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:26:41.633816 kubelet[2211]: W0113 21:26:41.633725 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:41.633816 kubelet[2211]: E0113 21:26:41.633772 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:41.690366 kubelet[2211]: E0113 21:26:41.689925 2211 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5dac77c0fde2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:26:41.62964221 +0000 UTC 
m=+0.270992038,LastTimestamp:2025-01-13 21:26:41.62964221 +0000 UTC m=+0.270992038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:26:41.690631 kubelet[2211]: I0113 21:26:41.690501 2211 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:26:41.691431 kubelet[2211]: I0113 21:26:41.691391 2211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:26:41.692789 kubelet[2211]: E0113 21:26:41.692762 2211 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:26:41.693067 kubelet[2211]: E0113 21:26:41.635909 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms" Jan 13 21:26:41.720910 kubelet[2211]: I0113 21:26:41.720874 2211 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:26:41.723506 kubelet[2211]: I0113 21:26:41.723429 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:26:41.732704 kubelet[2211]: I0113 21:26:41.732667 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:26:41.732800 kubelet[2211]: I0113 21:26:41.732718 2211 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:26:41.732800 kubelet[2211]: I0113 21:26:41.732743 2211 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:26:41.732890 kubelet[2211]: E0113 21:26:41.732804 2211 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:26:41.734005 kubelet[2211]: W0113 21:26:41.733425 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:41.734005 kubelet[2211]: E0113 21:26:41.733485 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:41.739586 kubelet[2211]: I0113 21:26:41.739558 2211 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:26:41.740055 kubelet[2211]: E0113 21:26:41.740024 2211 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jan 13 21:26:41.741481 kubelet[2211]: I0113 21:26:41.741441 2211 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:26:41.741481 kubelet[2211]: I0113 21:26:41.741465 2211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:26:41.741481 kubelet[2211]: I0113 21:26:41.741484 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:26:41.834006 kubelet[2211]: E0113 21:26:41.833951 2211 
kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:26:41.893986 kubelet[2211]: E0113 21:26:41.893831 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="400ms" Jan 13 21:26:41.941720 kubelet[2211]: I0113 21:26:41.941682 2211 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:26:41.942060 kubelet[2211]: E0113 21:26:41.942030 2211 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jan 13 21:26:42.034345 kubelet[2211]: E0113 21:26:42.034287 2211 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:26:42.114703 kubelet[2211]: I0113 21:26:42.114624 2211 policy_none.go:49] "None policy: Start" Jan 13 21:26:42.115665 kubelet[2211]: I0113 21:26:42.115638 2211 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:26:42.115719 kubelet[2211]: I0113 21:26:42.115670 2211 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:26:42.123853 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:26:42.145715 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:26:42.148956 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:26:42.158318 kubelet[2211]: I0113 21:26:42.158290 2211 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:26:42.158736 kubelet[2211]: I0113 21:26:42.158652 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:26:42.160118 kubelet[2211]: E0113 21:26:42.160097 2211 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:26:42.296315 kubelet[2211]: E0113 21:26:42.296242 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" Jan 13 21:26:42.344543 kubelet[2211]: I0113 21:26:42.344509 2211 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:26:42.344906 kubelet[2211]: E0113 21:26:42.344889 2211 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jan 13 21:26:42.435627 kubelet[2211]: I0113 21:26:42.435415 2211 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:26:42.436808 kubelet[2211]: I0113 21:26:42.436782 2211 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:26:42.438160 kubelet[2211]: I0113 21:26:42.437637 2211 topology_manager.go:215] "Topology Admit Handler" podUID="208c4c4495b00a2da265b5765dab0bd9" 
podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:26:42.443471 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Jan 13 21:26:42.454104 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. Jan 13 21:26:42.473737 systemd[1]: Created slice kubepods-burstable-pod208c4c4495b00a2da265b5765dab0bd9.slice - libcontainer container kubepods-burstable-pod208c4c4495b00a2da265b5765dab0bd9.slice. Jan 13 21:26:42.539133 kubelet[2211]: I0113 21:26:42.539047 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:42.539133 kubelet[2211]: I0113 21:26:42.539130 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:42.539383 kubelet[2211]: I0113 21:26:42.539163 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:42.539383 kubelet[2211]: I0113 21:26:42.539196 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/208c4c4495b00a2da265b5765dab0bd9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"208c4c4495b00a2da265b5765dab0bd9\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:26:42.539383 kubelet[2211]: I0113 21:26:42.539304 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:42.539493 kubelet[2211]: I0113 21:26:42.539417 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:42.539493 kubelet[2211]: I0113 21:26:42.539477 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:26:42.539554 kubelet[2211]: I0113 21:26:42.539520 2211 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/208c4c4495b00a2da265b5765dab0bd9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"208c4c4495b00a2da265b5765dab0bd9\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:26:42.539599 kubelet[2211]: I0113 21:26:42.539573 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/208c4c4495b00a2da265b5765dab0bd9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"208c4c4495b00a2da265b5765dab0bd9\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:26:42.753379 kubelet[2211]: E0113 21:26:42.753352 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:42.754129 containerd[1453]: time="2025-01-13T21:26:42.754080482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 21:26:42.772419 kubelet[2211]: E0113 21:26:42.772357 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:42.773026 containerd[1453]: time="2025-01-13T21:26:42.772985498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 21:26:42.776385 kubelet[2211]: E0113 21:26:42.776352 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:42.776939 containerd[1453]: time="2025-01-13T21:26:42.776895219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:208c4c4495b00a2da265b5765dab0bd9,Namespace:kube-system,Attempt:0,}" Jan 13 21:26:42.792496 kubelet[2211]: W0113 21:26:42.792451 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:42.792566 kubelet[2211]: E0113 21:26:42.792504 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:42.862516 kubelet[2211]: W0113 21:26:42.862420 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:42.862516 kubelet[2211]: E0113 21:26:42.862514 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:43.097788 kubelet[2211]: E0113 21:26:43.097656 2211 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="1.6s" Jan 13 21:26:43.146764 kubelet[2211]: I0113 21:26:43.146729 2211 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:26:43.147161 kubelet[2211]: E0113 21:26:43.147143 2211 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jan 13 21:26:43.159858 kubelet[2211]: W0113 21:26:43.159780 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:43.159858 kubelet[2211]: E0113 21:26:43.159858 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:43.216336 kubelet[2211]: W0113 21:26:43.216215 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:43.216336 kubelet[2211]: E0113 21:26:43.216321 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:43.432950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995280058.mount: Deactivated successfully. 
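The "Failed to ensure lease exists, will retry" entries above show the retry interval doubling: 200ms at 21:26:41.693, 400ms at 21:26:41.893, 800ms at 21:26:42.296, and 1.6s at 21:26:43.097. A toy Go sketch of that doubling schedule; the cap below is an assumption, since the log never shows where the kubelet stops doubling:

// backoff.go - toy reproduction of the doubling retry interval seen in
// the lease-controller entries above (200ms -> 400ms -> 800ms -> 1.6s).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap, not taken from this log
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retrying in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}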
Jan 13 21:26:43.439656 containerd[1453]: time="2025-01-13T21:26:43.439614602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:26:43.440610 containerd[1453]: time="2025-01-13T21:26:43.440556289Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:26:43.441399 containerd[1453]: time="2025-01-13T21:26:43.441364609Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:26:43.442142 containerd[1453]: time="2025-01-13T21:26:43.442103427Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:26:43.443071 containerd[1453]: time="2025-01-13T21:26:43.443045314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:26:43.443860 containerd[1453]: time="2025-01-13T21:26:43.443836041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:26:43.444671 containerd[1453]: time="2025-01-13T21:26:43.444640735Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:26:43.448724 containerd[1453]: time="2025-01-13T21:26:43.448687028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:26:43.449493 containerd[1453]: time="2025-01-13T21:26:43.449460192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 672.482133ms" Jan 13 21:26:43.450668 containerd[1453]: time="2025-01-13T21:26:43.450623664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.552943ms" Jan 13 21:26:43.451830 containerd[1453]: time="2025-01-13T21:26:43.451798768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 697.636449ms" Jan 13 21:26:43.638987 kubelet[2211]: E0113 21:26:43.638932 2211 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": 
dial tcp 10.0.0.116:6443: connect: connection refused Jan 13 21:26:43.703901 containerd[1453]: time="2025-01-13T21:26:43.703521472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:43.704445 containerd[1453]: time="2025-01-13T21:26:43.703753136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:43.705113 containerd[1453]: time="2025-01-13T21:26:43.705032080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:43.705504 containerd[1453]: time="2025-01-13T21:26:43.705362895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:43.764713 systemd[1]: Started cri-containerd-0516a11ae9f27fd51d484abb611c317e2471f0256db4ebf9833fae58fdfcd491.scope - libcontainer container 0516a11ae9f27fd51d484abb611c317e2471f0256db4ebf9833fae58fdfcd491. Jan 13 21:26:43.768720 containerd[1453]: time="2025-01-13T21:26:43.768226645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:43.768720 containerd[1453]: time="2025-01-13T21:26:43.768445926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:43.768720 containerd[1453]: time="2025-01-13T21:26:43.768476184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:43.768720 containerd[1453]: time="2025-01-13T21:26:43.768608688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:43.775796 containerd[1453]: time="2025-01-13T21:26:43.773807102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:43.775796 containerd[1453]: time="2025-01-13T21:26:43.773878058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:43.775796 containerd[1453]: time="2025-01-13T21:26:43.773887916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:43.775796 containerd[1453]: time="2025-01-13T21:26:43.773981466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:43.806415 systemd[1]: Started cri-containerd-9e2f7187dcfcdc5619035ff40eb0a9c257ab697a520cac16c458b2fb11a3d9c3.scope - libcontainer container 9e2f7187dcfcdc5619035ff40eb0a9c257ab697a520cac16c458b2fb11a3d9c3. Jan 13 21:26:43.809680 systemd[1]: Started cri-containerd-330fe6ada2025c53e0045d22ff9a506b864dfe64db9fde66c6f56f2f3353bdcc.scope - libcontainer container 330fe6ada2025c53e0045d22ff9a506b864dfe64db9fde66c6f56f2f3353bdcc. 
Jan 13 21:26:43.833784 containerd[1453]: time="2025-01-13T21:26:43.833477245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0516a11ae9f27fd51d484abb611c317e2471f0256db4ebf9833fae58fdfcd491\"" Jan 13 21:26:43.835354 kubelet[2211]: E0113 21:26:43.835071 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:43.839095 containerd[1453]: time="2025-01-13T21:26:43.839055577Z" level=info msg="CreateContainer within sandbox \"0516a11ae9f27fd51d484abb611c317e2471f0256db4ebf9833fae58fdfcd491\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:26:43.864306 containerd[1453]: time="2025-01-13T21:26:43.864146205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:208c4c4495b00a2da265b5765dab0bd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"330fe6ada2025c53e0045d22ff9a506b864dfe64db9fde66c6f56f2f3353bdcc\"" Jan 13 21:26:43.865151 kubelet[2211]: E0113 21:26:43.865116 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:43.865874 containerd[1453]: time="2025-01-13T21:26:43.865725737Z" level=info msg="CreateContainer within sandbox \"0516a11ae9f27fd51d484abb611c317e2471f0256db4ebf9833fae58fdfcd491\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf09d57c5e0a2576e62ebf7815da619cffb15bb5d3af5d15167bde134ece6995\"" Jan 13 21:26:43.866635 containerd[1453]: time="2025-01-13T21:26:43.866600594Z" level=info msg="StartContainer for \"bf09d57c5e0a2576e62ebf7815da619cffb15bb5d3af5d15167bde134ece6995\"" Jan 13 21:26:43.869126 containerd[1453]: time="2025-01-13T21:26:43.869016930Z" level=info msg="CreateContainer within sandbox \"330fe6ada2025c53e0045d22ff9a506b864dfe64db9fde66c6f56f2f3353bdcc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:26:43.943600 containerd[1453]: time="2025-01-13T21:26:43.943552695Z" level=info msg="CreateContainer within sandbox \"330fe6ada2025c53e0045d22ff9a506b864dfe64db9fde66c6f56f2f3353bdcc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bf427a13d20aa06a945912abb801ae4a1a5e9b0ce363ac7dd5cfd0c9ef7d2766\"" Jan 13 21:26:43.944451 containerd[1453]: time="2025-01-13T21:26:43.944408807Z" level=info msg="StartContainer for \"bf427a13d20aa06a945912abb801ae4a1a5e9b0ce363ac7dd5cfd0c9ef7d2766\"" Jan 13 21:26:43.945706 containerd[1453]: time="2025-01-13T21:26:43.945662803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e2f7187dcfcdc5619035ff40eb0a9c257ab697a520cac16c458b2fb11a3d9c3\"" Jan 13 21:26:43.946687 kubelet[2211]: E0113 21:26:43.946655 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:43.949223 containerd[1453]: time="2025-01-13T21:26:43.949188197Z" level=info msg="CreateContainer within sandbox \"9e2f7187dcfcdc5619035ff40eb0a9c257ab697a520cac16c458b2fb11a3d9c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 
21:26:43.952132 systemd[1]: Started cri-containerd-bf09d57c5e0a2576e62ebf7815da619cffb15bb5d3af5d15167bde134ece6995.scope - libcontainer container bf09d57c5e0a2576e62ebf7815da619cffb15bb5d3af5d15167bde134ece6995. Jan 13 21:26:43.969482 containerd[1453]: time="2025-01-13T21:26:43.968702807Z" level=info msg="CreateContainer within sandbox \"9e2f7187dcfcdc5619035ff40eb0a9c257ab697a520cac16c458b2fb11a3d9c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8eb9e375e05877d8631679f3672aa2967113f29442d130350603019a3ffae95f\"" Jan 13 21:26:43.969559 containerd[1453]: time="2025-01-13T21:26:43.969470901Z" level=info msg="StartContainer for \"8eb9e375e05877d8631679f3672aa2967113f29442d130350603019a3ffae95f\"" Jan 13 21:26:43.977602 systemd[1]: Started cri-containerd-bf427a13d20aa06a945912abb801ae4a1a5e9b0ce363ac7dd5cfd0c9ef7d2766.scope - libcontainer container bf427a13d20aa06a945912abb801ae4a1a5e9b0ce363ac7dd5cfd0c9ef7d2766. Jan 13 21:26:43.998410 containerd[1453]: time="2025-01-13T21:26:43.998235178Z" level=info msg="StartContainer for \"bf09d57c5e0a2576e62ebf7815da619cffb15bb5d3af5d15167bde134ece6995\" returns successfully" Jan 13 21:26:44.001505 systemd[1]: Started cri-containerd-8eb9e375e05877d8631679f3672aa2967113f29442d130350603019a3ffae95f.scope - libcontainer container 8eb9e375e05877d8631679f3672aa2967113f29442d130350603019a3ffae95f. Jan 13 21:26:44.028497 containerd[1453]: time="2025-01-13T21:26:44.028426840Z" level=info msg="StartContainer for \"bf427a13d20aa06a945912abb801ae4a1a5e9b0ce363ac7dd5cfd0c9ef7d2766\" returns successfully" Jan 13 21:26:44.052244 containerd[1453]: time="2025-01-13T21:26:44.052156293Z" level=info msg="StartContainer for \"8eb9e375e05877d8631679f3672aa2967113f29442d130350603019a3ffae95f\" returns successfully" Jan 13 21:26:44.745038 kubelet[2211]: E0113 21:26:44.744976 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:44.746921 kubelet[2211]: E0113 21:26:44.746891 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:44.748225 kubelet[2211]: I0113 21:26:44.748190 2211 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:26:44.749645 kubelet[2211]: E0113 21:26:44.749611 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:45.325640 kubelet[2211]: E0113 21:26:45.325588 2211 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:26:45.427740 kubelet[2211]: I0113 21:26:45.427677 2211 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:26:45.625160 kubelet[2211]: I0113 21:26:45.624984 2211 apiserver.go:52] "Watching apiserver" Jan 13 21:26:45.633902 kubelet[2211]: I0113 21:26:45.633863 2211 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:26:45.753830 kubelet[2211]: E0113 21:26:45.753789 2211 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 13 21:26:45.754350 
kubelet[2211]: E0113 21:26:45.754342 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:46.377817 kubelet[2211]: E0113 21:26:46.377769 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:46.750212 kubelet[2211]: E0113 21:26:46.750181 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:47.985143 systemd[1]: Reloading requested from client PID 2494 ('systemctl') (unit session-7.scope)... Jan 13 21:26:47.985159 systemd[1]: Reloading... Jan 13 21:26:48.090298 zram_generator::config[2548]: No configuration found. Jan 13 21:26:48.204410 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:26:48.321038 systemd[1]: Reloading finished in 335 ms. Jan 13 21:26:48.364056 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:26:48.385792 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:26:48.386092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:26:48.386154 systemd[1]: kubelet.service: Consumed 1.057s CPU time, 116.5M memory peak, 0B memory swap peak. Jan 13 21:26:48.397461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:26:48.546867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:26:48.552416 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:26:48.607226 kubelet[2578]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:26:48.607226 kubelet[2578]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:26:48.607226 kubelet[2578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:26:48.607226 kubelet[2578]: I0113 21:26:48.607188 2578 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:26:48.613646 kubelet[2578]: I0113 21:26:48.613588 2578 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:26:48.613646 kubelet[2578]: I0113 21:26:48.613657 2578 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:26:48.614437 kubelet[2578]: I0113 21:26:48.614399 2578 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:26:48.616050 kubelet[2578]: I0113 21:26:48.616032 2578 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
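After the restart, the new kubelet (PID 2578) reports "Client rotation is on" and bootstraps from the certificate pair the first instance obtained, at the path named in the log. A standard-library Go sketch for inspecting that pair's expiry (run as root on the node, since /var/lib/kubelet/pki is not world-readable):

// certinfo.go - print subject and expiry of the rotated kubelet client
// certificate loaded above. The file holds key and certificate PEM
// blocks concatenated, so all blocks are walked.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}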
Jan 13 21:26:48.617923 kubelet[2578]: I0113 21:26:48.617889 2578 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:26:48.625743 kubelet[2578]: I0113 21:26:48.625720 2578 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:26:48.625990 kubelet[2578]: I0113 21:26:48.625973 2578 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:26:48.626155 kubelet[2578]: I0113 21:26:48.626132 2578 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:26:48.626225 kubelet[2578]: I0113 21:26:48.626163 2578 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:26:48.626225 kubelet[2578]: I0113 21:26:48.626172 2578 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:26:48.626225 kubelet[2578]: I0113 21:26:48.626207 2578 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:26:48.626368 kubelet[2578]: I0113 21:26:48.626349 2578 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:26:48.626396 kubelet[2578]: I0113 21:26:48.626370 2578 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:26:48.626423 kubelet[2578]: I0113 21:26:48.626401 2578 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:26:48.626445 kubelet[2578]: I0113 21:26:48.626426 2578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:26:48.626989 kubelet[2578]: I0113 21:26:48.626972 2578 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:26:48.627179 kubelet[2578]: I0113 21:26:48.627167 2578 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:26:48.628154 kubelet[2578]: I0113 21:26:48.628140 2578 server.go:1256] "Started kubelet" Jan 13 21:26:48.631140 kubelet[2578]: I0113 21:26:48.630652 2578 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:26:48.631140 kubelet[2578]: I0113 21:26:48.630722 2578 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:26:48.631140 kubelet[2578]: I0113 21:26:48.631003 2578 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:26:48.638015 kubelet[2578]: I0113 21:26:48.637968 2578 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:26:48.644064 kubelet[2578]: E0113 21:26:48.644030 2578 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:26:48.645064 kubelet[2578]: I0113 21:26:48.645034 2578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:26:48.646619 kubelet[2578]: I0113 21:26:48.646583 2578 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:26:48.647552 kubelet[2578]: I0113 21:26:48.647527 2578 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:26:48.648097 kubelet[2578]: I0113 21:26:48.648078 2578 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:26:48.648471 kubelet[2578]: I0113 21:26:48.648457 2578 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:26:48.649128 kubelet[2578]: I0113 21:26:48.649102 2578 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:26:48.650386 kubelet[2578]: I0113 21:26:48.650357 2578 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:26:48.662222 kubelet[2578]: I0113 21:26:48.662195 2578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:26:48.665791 kubelet[2578]: I0113 21:26:48.665763 2578 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:26:48.665939 kubelet[2578]: I0113 21:26:48.665921 2578 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:26:48.666121 kubelet[2578]: I0113 21:26:48.666103 2578 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:26:48.666237 kubelet[2578]: E0113 21:26:48.666221 2578 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:26:48.689977 kubelet[2578]: I0113 21:26:48.689931 2578 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:26:48.689977 kubelet[2578]: I0113 21:26:48.689959 2578 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:26:48.689977 kubelet[2578]: I0113 21:26:48.689977 2578 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:26:48.690977 kubelet[2578]: I0113 21:26:48.690149 2578 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:26:48.690977 kubelet[2578]: I0113 21:26:48.690192 2578 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:26:48.690977 kubelet[2578]: I0113 21:26:48.690200 2578 policy_none.go:49] "None policy: Start" Jan 13 21:26:48.690977 kubelet[2578]: I0113 21:26:48.690985 2578 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:26:48.691197 kubelet[2578]: I0113 21:26:48.691005 2578 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:26:48.691259 kubelet[2578]: I0113 21:26:48.691201 2578 state_mem.go:75] "Updated machine memory state" Jan 13 21:26:48.700940 kubelet[2578]: I0113 21:26:48.700545 2578 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:26:48.701180 kubelet[2578]: I0113 21:26:48.701087 2578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:26:48.753640 kubelet[2578]: I0113 21:26:48.753592 2578 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:26:48.762596 kubelet[2578]: I0113 21:26:48.762350 2578 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 21:26:48.762596 kubelet[2578]: I0113 21:26:48.762443 2578 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:26:48.767315 kubelet[2578]: I0113 21:26:48.766787 2578 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:26:48.767315 kubelet[2578]: I0113 21:26:48.766879 2578 topology_manager.go:215] "Topology Admit Handler" podUID="208c4c4495b00a2da265b5765dab0bd9" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:26:48.767315 kubelet[2578]: I0113 21:26:48.766921 2578 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:26:48.773387 kubelet[2578]: E0113 21:26:48.773350 2578 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:48.949474 kubelet[2578]: I0113 21:26:48.949322 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/208c4c4495b00a2da265b5765dab0bd9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"208c4c4495b00a2da265b5765dab0bd9\") " 
pod="kube-system/kube-apiserver-localhost" Jan 13 21:26:48.949474 kubelet[2578]: I0113 21:26:48.949395 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:48.949474 kubelet[2578]: I0113 21:26:48.949443 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:48.949474 kubelet[2578]: I0113 21:26:48.949478 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:48.949941 kubelet[2578]: I0113 21:26:48.949510 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/208c4c4495b00a2da265b5765dab0bd9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"208c4c4495b00a2da265b5765dab0bd9\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:26:48.949941 kubelet[2578]: I0113 21:26:48.949551 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/208c4c4495b00a2da265b5765dab0bd9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"208c4c4495b00a2da265b5765dab0bd9\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:26:48.949941 kubelet[2578]: I0113 21:26:48.949615 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:48.949941 kubelet[2578]: I0113 21:26:48.949647 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:26:48.949941 kubelet[2578]: I0113 21:26:48.949675 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:26:49.073591 kubelet[2578]: E0113 21:26:49.073470 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:49.073901 kubelet[2578]: E0113 21:26:49.073870 
2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:49.074194 kubelet[2578]: E0113 21:26:49.074177 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:49.627050 kubelet[2578]: I0113 21:26:49.626994 2578 apiserver.go:52] "Watching apiserver" Jan 13 21:26:49.649577 kubelet[2578]: I0113 21:26:49.648709 2578 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:26:49.679967 kubelet[2578]: E0113 21:26:49.679930 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:49.680671 kubelet[2578]: E0113 21:26:49.680640 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:49.681585 kubelet[2578]: E0113 21:26:49.681494 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:49.777378 kubelet[2578]: I0113 21:26:49.777020 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.776957035 podStartE2EDuration="3.776957035s" podCreationTimestamp="2025-01-13 21:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:49.768871765 +0000 UTC m=+1.212273804" watchObservedRunningTime="2025-01-13 21:26:49.776957035 +0000 UTC m=+1.220359074" Jan 13 21:26:49.788572 kubelet[2578]: I0113 21:26:49.788477 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7884369119999999 podStartE2EDuration="1.788436912s" podCreationTimestamp="2025-01-13 21:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:49.779180801 +0000 UTC m=+1.222582830" watchObservedRunningTime="2025-01-13 21:26:49.788436912 +0000 UTC m=+1.231838951" Jan 13 21:26:49.799252 kubelet[2578]: I0113 21:26:49.799208 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.799176197 podStartE2EDuration="1.799176197s" podCreationTimestamp="2025-01-13 21:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:49.788883141 +0000 UTC m=+1.232285180" watchObservedRunningTime="2025-01-13 21:26:49.799176197 +0000 UTC m=+1.242578236" Jan 13 21:26:50.681087 kubelet[2578]: E0113 21:26:50.681039 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:52.921000 kubelet[2578]: E0113 21:26:52.920964 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
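The recurring "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the classic glibc resolver honors (three, MAXNS), so the kubelet truncates to the applied line shown: 1.1.1.1 1.0.0.1 8.8.8.8. A small standard-library Go sketch that performs the same count on /etc/resolv.conf:

// nslimit.go - count nameserver entries in /etc/resolv.conf and flag the
// condition behind the kubelet's "Nameserver limits exceeded" errors.
// The limit of 3 is the long-standing glibc MAXNS; the kubelet keeps the
// first three and logs the applied line, as seen above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > 3 {
		fmt.Printf("limit exceeded: %d nameservers, applied line: %s\n",
			len(servers), strings.Join(servers[:3], " "))
	} else {
		fmt.Println("within limits:", servers)
	}
}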
Jan 13 21:26:53.205336 sudo[1632]: pam_unix(sudo:session): session closed for user root Jan 13 21:26:53.207290 sshd[1629]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:53.211470 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:56772.service: Deactivated successfully. Jan 13 21:26:53.213606 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:26:53.213794 systemd[1]: session-7.scope: Consumed 4.811s CPU time, 190.3M memory peak, 0B memory swap peak. Jan 13 21:26:53.214241 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:26:53.215161 systemd-logind[1437]: Removed session 7. Jan 13 21:26:55.516847 update_engine[1441]: I20250113 21:26:55.516732 1441 update_attempter.cc:509] Updating boot flags... Jan 13 21:26:55.543724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2687) Jan 13 21:26:55.589027 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2690) Jan 13 21:26:55.616338 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2690) Jan 13 21:26:58.191783 kubelet[2578]: E0113 21:26:58.191743 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:58.238573 kubelet[2578]: E0113 21:26:58.238491 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:58.691337 kubelet[2578]: E0113 21:26:58.691289 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:58.691693 kubelet[2578]: E0113 21:26:58.691376 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:01.961626 kubelet[2578]: I0113 21:27:01.961583 2578 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:27:01.962223 containerd[1453]: time="2025-01-13T21:27:01.962173477Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:27:01.962501 kubelet[2578]: I0113 21:27:01.962434 2578 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:27:02.634010 kubelet[2578]: I0113 21:27:02.633950 2578 topology_manager.go:215] "Topology Admit Handler" podUID="45593ce4-1b79-4336-aa1a-3f9e18e96c21" podNamespace="kube-system" podName="kube-proxy-mmwfp" Jan 13 21:27:02.641672 systemd[1]: Created slice kubepods-besteffort-pod45593ce4_1b79_4336_aa1a_3f9e18e96c21.slice - libcontainer container kubepods-besteffort-pod45593ce4_1b79_4336_aa1a_3f9e18e96c21.slice. 
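The slice created for kube-proxy encodes the pod's QoS class and UID as kubepods-besteffort-pod<uid>.slice, with the UID's dashes rewritten to underscores because "-" is systemd's slice-nesting separator. A tiny Go sketch of that mapping as it appears in this log (the kubelet's real helper lives in its libcontainer cgroup manager):

// slicename.go - reproduce the systemd slice name created above for the
// kube-proxy pod. Naming rule inferred from this log's entries.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the Topology Admit Handler entry above.
	fmt.Println(podSlice("besteffort", "45593ce4-1b79-4336-aa1a-3f9e18e96c21"))
	// Output: kubepods-besteffort-pod45593ce4_1b79_4336_aa1a_3f9e18e96c21.slice
}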
Jan 13 21:27:02.833834 kubelet[2578]: I0113 21:27:02.833782 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45593ce4-1b79-4336-aa1a-3f9e18e96c21-xtables-lock\") pod \"kube-proxy-mmwfp\" (UID: \"45593ce4-1b79-4336-aa1a-3f9e18e96c21\") " pod="kube-system/kube-proxy-mmwfp" Jan 13 21:27:02.833834 kubelet[2578]: I0113 21:27:02.833828 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45593ce4-1b79-4336-aa1a-3f9e18e96c21-kube-proxy\") pod \"kube-proxy-mmwfp\" (UID: \"45593ce4-1b79-4336-aa1a-3f9e18e96c21\") " pod="kube-system/kube-proxy-mmwfp" Jan 13 21:27:02.833834 kubelet[2578]: I0113 21:27:02.833849 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45593ce4-1b79-4336-aa1a-3f9e18e96c21-lib-modules\") pod \"kube-proxy-mmwfp\" (UID: \"45593ce4-1b79-4336-aa1a-3f9e18e96c21\") " pod="kube-system/kube-proxy-mmwfp" Jan 13 21:27:02.834051 kubelet[2578]: I0113 21:27:02.833871 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fttjx\" (UniqueName: \"kubernetes.io/projected/45593ce4-1b79-4336-aa1a-3f9e18e96c21-kube-api-access-fttjx\") pod \"kube-proxy-mmwfp\" (UID: \"45593ce4-1b79-4336-aa1a-3f9e18e96c21\") " pod="kube-system/kube-proxy-mmwfp" Jan 13 21:27:02.926258 kubelet[2578]: E0113 21:27:02.925785 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:02.948594 kubelet[2578]: E0113 21:27:02.948562 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:02.949194 containerd[1453]: time="2025-01-13T21:27:02.949153902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmwfp,Uid:45593ce4-1b79-4336-aa1a-3f9e18e96c21,Namespace:kube-system,Attempt:0,}" Jan 13 21:27:02.977005 containerd[1453]: time="2025-01-13T21:27:02.976859602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:02.977655 containerd[1453]: time="2025-01-13T21:27:02.977604829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:02.977941 containerd[1453]: time="2025-01-13T21:27:02.977643191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:02.977941 containerd[1453]: time="2025-01-13T21:27:02.977817891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:03.002458 systemd[1]: Started cri-containerd-270d4fe47c4106706559876a3325b9385e00a2a2e9bb1fdeb3e7b11d063ef468.scope - libcontainer container 270d4fe47c4106706559876a3325b9385e00a2a2e9bb1fdeb3e7b11d063ef468. 
Jan 13 21:27:03.035564 containerd[1453]: time="2025-01-13T21:27:03.035503517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmwfp,Uid:45593ce4-1b79-4336-aa1a-3f9e18e96c21,Namespace:kube-system,Attempt:0,} returns sandbox id \"270d4fe47c4106706559876a3325b9385e00a2a2e9bb1fdeb3e7b11d063ef468\""
Jan 13 21:27:03.037529 kubelet[2578]: E0113 21:27:03.037494 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:03.043155 containerd[1453]: time="2025-01-13T21:27:03.041425604Z" level=info msg="CreateContainer within sandbox \"270d4fe47c4106706559876a3325b9385e00a2a2e9bb1fdeb3e7b11d063ef468\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:27:03.060178 kubelet[2578]: I0113 21:27:03.060120 2578 topology_manager.go:215] "Topology Admit Handler" podUID="e9f09765-32de-4c81-84b4-5ad71f83098f" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-kjzww"
Jan 13 21:27:03.072959 containerd[1453]: time="2025-01-13T21:27:03.072909211Z" level=info msg="CreateContainer within sandbox \"270d4fe47c4106706559876a3325b9385e00a2a2e9bb1fdeb3e7b11d063ef468\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"93f3d09e7c542b992b1cc7857002d49cf9f7fce5df3c77649022804ecf18b56f\""
Jan 13 21:27:03.073750 containerd[1453]: time="2025-01-13T21:27:03.073724800Z" level=info msg="StartContainer for \"93f3d09e7c542b992b1cc7857002d49cf9f7fce5df3c77649022804ecf18b56f\""
Jan 13 21:27:03.075227 systemd[1]: Created slice kubepods-besteffort-pode9f09765_32de_4c81_84b4_5ad71f83098f.slice - libcontainer container kubepods-besteffort-pode9f09765_32de_4c81_84b4_5ad71f83098f.slice.
Jan 13 21:27:03.115556 systemd[1]: Started cri-containerd-93f3d09e7c542b992b1cc7857002d49cf9f7fce5df3c77649022804ecf18b56f.scope - libcontainer container 93f3d09e7c542b992b1cc7857002d49cf9f7fce5df3c77649022804ecf18b56f.
Jan 13 21:27:03.136703 kubelet[2578]: I0113 21:27:03.136643 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9f09765-32de-4c81-84b4-5ad71f83098f-var-lib-calico\") pod \"tigera-operator-c7ccbd65-kjzww\" (UID: \"e9f09765-32de-4c81-84b4-5ad71f83098f\") " pod="tigera-operator/tigera-operator-c7ccbd65-kjzww"
Jan 13 21:27:03.136703 kubelet[2578]: I0113 21:27:03.136711 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmp9\" (UniqueName: \"kubernetes.io/projected/e9f09765-32de-4c81-84b4-5ad71f83098f-kube-api-access-lxmp9\") pod \"tigera-operator-c7ccbd65-kjzww\" (UID: \"e9f09765-32de-4c81-84b4-5ad71f83098f\") " pod="tigera-operator/tigera-operator-c7ccbd65-kjzww"
Jan 13 21:27:03.158143 containerd[1453]: time="2025-01-13T21:27:03.158088075Z" level=info msg="StartContainer for \"93f3d09e7c542b992b1cc7857002d49cf9f7fce5df3c77649022804ecf18b56f\" returns successfully"
Jan 13 21:27:03.379397 containerd[1453]: time="2025-01-13T21:27:03.379355070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-kjzww,Uid:e9f09765-32de-4c81-84b4-5ad71f83098f,Namespace:tigera-operator,Attempt:0,}"
Jan 13 21:27:03.410075 containerd[1453]: time="2025-01-13T21:27:03.409917079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:03.410075 containerd[1453]: time="2025-01-13T21:27:03.409984596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:03.410075 containerd[1453]: time="2025-01-13T21:27:03.410006739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:03.410413 containerd[1453]: time="2025-01-13T21:27:03.410141492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:03.431509 systemd[1]: Started cri-containerd-b298bb03b72096afe0255fee8cb3eb3b5e723b623cd262c719b43390efb856b1.scope - libcontainer container b298bb03b72096afe0255fee8cb3eb3b5e723b623cd262c719b43390efb856b1.
Jan 13 21:27:03.475068 containerd[1453]: time="2025-01-13T21:27:03.475020826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-kjzww,Uid:e9f09765-32de-4c81-84b4-5ad71f83098f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b298bb03b72096afe0255fee8cb3eb3b5e723b623cd262c719b43390efb856b1\""
Jan 13 21:27:03.477448 containerd[1453]: time="2025-01-13T21:27:03.477416938Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 21:27:03.700857 kubelet[2578]: E0113 21:27:03.700601 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:03.700857 kubelet[2578]: E0113 21:27:03.700801 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:03.708958 kubelet[2578]: I0113 21:27:03.708847 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mmwfp" podStartSLOduration=1.708764488 podStartE2EDuration="1.708764488s" podCreationTimestamp="2025-01-13 21:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:27:03.708610408 +0000 UTC m=+15.152012447" watchObservedRunningTime="2025-01-13 21:27:03.708764488 +0000 UTC m=+15.152166537"
Jan 13 21:27:03.948508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320338917.mount: Deactivated successfully.
Jan 13 21:27:07.397707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042043972.mount: Deactivated successfully.
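The pod_startup_latency_tracker.go line above reports kube-proxy's startup latency. Its pull timestamps are the zero time (0001-01-01), meaning no image had to be pulled, so podStartSLOduration equals podStartE2EDuration. When images are pulled, the SLO figure is the end-to-end duration minus the pull window; a sketch of that arithmetic, using the timestamps from the tigera-operator latency line that appears further down:

```go
// Sketch of the relation between the fields on an
// "Observed pod startup duration" log line:
//   podStartE2EDuration = observedRunningTime - podCreationTimestamp
//   podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // timestamp format used in the log
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-01-13 21:27:03 +0000 UTC")
	running := parse("2025-01-13 21:27:08.715369079 +0000 UTC")
	pullStart := parse("2025-01-13 21:27:03.476610967 +0000 UTC")
	pullEnd := parse("2025-01-13 21:27:07.950874605 +0000 UTC")

	e2e := running.Sub(created)         // 5.715369079s, as logged
	slo := e2e - pullEnd.Sub(pullStart) // ~1.241105441s (logged as 1.241105431; last digits differ from internal rounding)
	fmt.Println(e2e, slo)
}
```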
Jan 13 21:27:07.944777 containerd[1453]: time="2025-01-13T21:27:07.944731918Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:07.946757 containerd[1453]: time="2025-01-13T21:27:07.946701550Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764333"
Jan 13 21:27:07.947916 containerd[1453]: time="2025-01-13T21:27:07.947703209Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:07.949981 containerd[1453]: time="2025-01-13T21:27:07.949944702Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:07.950613 containerd[1453]: time="2025-01-13T21:27:07.950587604Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.47297658s"
Jan 13 21:27:07.950678 containerd[1453]: time="2025-01-13T21:27:07.950614705Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 13 21:27:07.952014 containerd[1453]: time="2025-01-13T21:27:07.951986991Z" level=info msg="CreateContainer within sandbox \"b298bb03b72096afe0255fee8cb3eb3b5e723b623cd262c719b43390efb856b1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 21:27:07.964861 containerd[1453]: time="2025-01-13T21:27:07.964812251Z" level=info msg="CreateContainer within sandbox \"b298bb03b72096afe0255fee8cb3eb3b5e723b623cd262c719b43390efb856b1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0bb58faa3aa1ebb1774ecf8fa24bca0479c3e62cdde8cd448bca424d81b4e7a9\""
Jan 13 21:27:07.965369 containerd[1453]: time="2025-01-13T21:27:07.965345867Z" level=info msg="StartContainer for \"0bb58faa3aa1ebb1774ecf8fa24bca0479c3e62cdde8cd448bca424d81b4e7a9\""
Jan 13 21:27:07.996388 systemd[1]: Started cri-containerd-0bb58faa3aa1ebb1774ecf8fa24bca0479c3e62cdde8cd448bca424d81b4e7a9.scope - libcontainer container 0bb58faa3aa1ebb1774ecf8fa24bca0479c3e62cdde8cd448bca424d81b4e7a9.
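The "in 4.47297658s" on the Pulled line is containerd's own measurement of the operator image pull. It is roughly the gap between the PullImage entry at 21:27:03.477 and this event; the log-line gap comes out slightly wider because it includes bookkeeping outside the measured section:

```go
// Sketch: difference between the two logged timestamps around the pull.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:27:03.477416938Z") // PullImage logged
	end, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:27:07.950587604Z")   // Pulled logged
	fmt.Println(end.Sub(start)) // 4.473170666s, vs. containerd's measured 4.47297658s
}
```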
Jan 13 21:27:08.067071 containerd[1453]: time="2025-01-13T21:27:08.067010420Z" level=info msg="StartContainer for \"0bb58faa3aa1ebb1774ecf8fa24bca0479c3e62cdde8cd448bca424d81b4e7a9\" returns successfully"
Jan 13 21:27:08.715463 kubelet[2578]: I0113 21:27:08.715417 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-kjzww" podStartSLOduration=1.241105431 podStartE2EDuration="5.715369079s" podCreationTimestamp="2025-01-13 21:27:03 +0000 UTC" firstStartedPulling="2025-01-13 21:27:03.476610967 +0000 UTC m=+14.920012996" lastFinishedPulling="2025-01-13 21:27:07.950874605 +0000 UTC m=+19.394276644" observedRunningTime="2025-01-13 21:27:08.715216892 +0000 UTC m=+20.158618931" watchObservedRunningTime="2025-01-13 21:27:08.715369079 +0000 UTC m=+20.158771118"
Jan 13 21:27:10.848953 kubelet[2578]: I0113 21:27:10.848893 2578 topology_manager.go:215] "Topology Admit Handler" podUID="aefad872-491d-436f-ac04-70749d73857a" podNamespace="calico-system" podName="calico-typha-76d87cf7d4-drzsc"
Jan 13 21:27:10.862888 systemd[1]: Created slice kubepods-besteffort-podaefad872_491d_436f_ac04_70749d73857a.slice - libcontainer container kubepods-besteffort-podaefad872_491d_436f_ac04_70749d73857a.slice.
Jan 13 21:27:10.899293 kubelet[2578]: I0113 21:27:10.898593 2578 topology_manager.go:215] "Topology Admit Handler" podUID="2fe4b44c-08c5-4c04-bf51-f2446167f749" podNamespace="calico-system" podName="calico-node-8dht7"
Jan 13 21:27:10.908302 systemd[1]: Created slice kubepods-besteffort-pod2fe4b44c_08c5_4c04_bf51_f2446167f749.slice - libcontainer container kubepods-besteffort-pod2fe4b44c_08c5_4c04_bf51_f2446167f749.slice.
Jan 13 21:27:10.987986 kubelet[2578]: I0113 21:27:10.987935 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aefad872-491d-436f-ac04-70749d73857a-typha-certs\") pod \"calico-typha-76d87cf7d4-drzsc\" (UID: \"aefad872-491d-436f-ac04-70749d73857a\") " pod="calico-system/calico-typha-76d87cf7d4-drzsc"
Jan 13 21:27:10.988409 kubelet[2578]: I0113 21:27:10.988042 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aefad872-491d-436f-ac04-70749d73857a-tigera-ca-bundle\") pod \"calico-typha-76d87cf7d4-drzsc\" (UID: \"aefad872-491d-436f-ac04-70749d73857a\") " pod="calico-system/calico-typha-76d87cf7d4-drzsc"
Jan 13 21:27:10.988409 kubelet[2578]: I0113 21:27:10.988075 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpk26\" (UniqueName: \"kubernetes.io/projected/aefad872-491d-436f-ac04-70749d73857a-kube-api-access-hpk26\") pod \"calico-typha-76d87cf7d4-drzsc\" (UID: \"aefad872-491d-436f-ac04-70749d73857a\") " pod="calico-system/calico-typha-76d87cf7d4-drzsc"
Jan 13 21:27:11.040100 kubelet[2578]: I0113 21:27:11.038242 2578 topology_manager.go:215] "Topology Admit Handler" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36" podNamespace="calico-system" podName="csi-node-driver-c4drk"
Jan 13 21:27:11.040100 kubelet[2578]: E0113 21:27:11.038601 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c4drk" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36"
Jan 13 21:27:11.091527 kubelet[2578]: I0113 21:27:11.089290 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2fe4b44c-08c5-4c04-bf51-f2446167f749-flexvol-driver-host\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091527 kubelet[2578]: I0113 21:27:11.089348 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fe4b44c-08c5-4c04-bf51-f2446167f749-tigera-ca-bundle\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091527 kubelet[2578]: I0113 21:27:11.089379 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2fe4b44c-08c5-4c04-bf51-f2446167f749-cni-net-dir\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091527 kubelet[2578]: I0113 21:27:11.089405 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzm24\" (UniqueName: \"kubernetes.io/projected/2fe4b44c-08c5-4c04-bf51-f2446167f749-kube-api-access-mzm24\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091527 kubelet[2578]: I0113 21:27:11.089433 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2fe4b44c-08c5-4c04-bf51-f2446167f749-cni-log-dir\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091803 kubelet[2578]: I0113 21:27:11.089459 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fe4b44c-08c5-4c04-bf51-f2446167f749-xtables-lock\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091803 kubelet[2578]: I0113 21:27:11.089486 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fe4b44c-08c5-4c04-bf51-f2446167f749-lib-modules\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091803 kubelet[2578]: I0113 21:27:11.089510 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2fe4b44c-08c5-4c04-bf51-f2446167f749-var-run-calico\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091803 kubelet[2578]: I0113 21:27:11.089535 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2fe4b44c-08c5-4c04-bf51-f2446167f749-var-lib-calico\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091803 kubelet[2578]: I0113 21:27:11.089575 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2fe4b44c-08c5-4c04-bf51-f2446167f749-cni-bin-dir\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091976 kubelet[2578]: I0113 21:27:11.089637 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2fe4b44c-08c5-4c04-bf51-f2446167f749-policysync\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.091976 kubelet[2578]: I0113 21:27:11.089664 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2fe4b44c-08c5-4c04-bf51-f2446167f749-node-certs\") pod \"calico-node-8dht7\" (UID: \"2fe4b44c-08c5-4c04-bf51-f2446167f749\") " pod="calico-system/calico-node-8dht7"
Jan 13 21:27:11.168728 kubelet[2578]: E0113 21:27:11.167908 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:11.168900 containerd[1453]: time="2025-01-13T21:27:11.168324567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d87cf7d4-drzsc,Uid:aefad872-491d-436f-ac04-70749d73857a,Namespace:calico-system,Attempt:0,}"
Jan 13 21:27:11.190589 kubelet[2578]: I0113 21:27:11.190532 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b709ff7-1b29-4a55-8a27-61c5d7be7f36-kubelet-dir\") pod \"csi-node-driver-c4drk\" (UID: \"4b709ff7-1b29-4a55-8a27-61c5d7be7f36\") " pod="calico-system/csi-node-driver-c4drk"
Jan 13 21:27:11.192310 kubelet[2578]: I0113 21:27:11.191352 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4b709ff7-1b29-4a55-8a27-61c5d7be7f36-varrun\") pod \"csi-node-driver-c4drk\" (UID: \"4b709ff7-1b29-4a55-8a27-61c5d7be7f36\") " pod="calico-system/csi-node-driver-c4drk"
Jan 13 21:27:11.192501 kubelet[2578]: E0113 21:27:11.192451 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:27:11.192501 kubelet[2578]: W0113 21:27:11.192491 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:27:11.192851 kubelet[2578]: E0113 21:27:11.192524 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:27:11.192851 kubelet[2578]: I0113 21:27:11.192550 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4b709ff7-1b29-4a55-8a27-61c5d7be7f36-socket-dir\") pod \"csi-node-driver-c4drk\" (UID: \"4b709ff7-1b29-4a55-8a27-61c5d7be7f36\") " pod="calico-system/csi-node-driver-c4drk"
[... the same three-message FlexVolume init probe failure (driver-call.go:262, driver-call.go:149, plugins.go:730) repeats at 21:27:11.193 through 21:27:11.195 ...]
Jan 13 21:27:11.196783 containerd[1453]: time="2025-01-13T21:27:11.196610033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:11.196783 containerd[1453]: time="2025-01-13T21:27:11.196732002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:11.197026 containerd[1453]: time="2025-01-13T21:27:11.196765846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:11.197231 containerd[1453]: time="2025-01-13T21:27:11.197044420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[... the FlexVolume init probe failure repeats at 21:27:11.199 through 21:27:11.211 ...]
Jan 13 21:27:11.212511 kubelet[2578]: I0113 21:27:11.212306 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4b709ff7-1b29-4a55-8a27-61c5d7be7f36-registration-dir\") pod \"csi-node-driver-c4drk\" (UID: \"4b709ff7-1b29-4a55-8a27-61c5d7be7f36\") " pod="calico-system/csi-node-driver-c4drk"
[... the FlexVolume init probe failure repeats at 21:27:11.212 through 21:27:11.214 ...]
Jan 13 21:27:11.215004 kubelet[2578]: I0113 21:27:11.214941 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6w4\" (UniqueName: \"kubernetes.io/projected/4b709ff7-1b29-4a55-8a27-61c5d7be7f36-kube-api-access-cz6w4\") pod \"csi-node-driver-c4drk\" (UID: \"4b709ff7-1b29-4a55-8a27-61c5d7be7f36\") " pod="calico-system/csi-node-driver-c4drk"
[... the FlexVolume init probe failure repeats at 21:27:11.215 through 21:27:11.218 ...]
Jan 13 21:27:11.228411 systemd[1]: Started cri-containerd-4c3eba4964bcdfced8389e35e5611616a72efb3ebcd4cb93a0865ef2417c0fbd.scope - libcontainer container 4c3eba4964bcdfced8389e35e5611616a72efb3ebcd4cb93a0865ef2417c0fbd.
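The repeating driver-call.go/plugins.go burst above is one failure reported three ways. On each volume-plugin probe the kubelet finds the FlexVolume directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to exec its uds driver with the init argument, gets empty output because the binary is not installed yet (Calico's flexvol init step ships it; note the pod2daemon-flexvol pull further down), and then fails to parse that empty output as JSON. The "unexpected end of JSON input" part is ordinary Go behavior:

```go
// Sketch: unmarshalling the driver's empty stdout reproduces the
// exact error string seen in driver-call.go:262.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var status struct {
		Status string `json:"status"`
	}
	err := json.Unmarshal([]byte(""), &status)
	fmt.Println(err) // unexpected end of JSON input
}
```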
Jan 13 21:27:11.262579 containerd[1453]: time="2025-01-13T21:27:11.262529844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d87cf7d4-drzsc,Uid:aefad872-491d-436f-ac04-70749d73857a,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c3eba4964bcdfced8389e35e5611616a72efb3ebcd4cb93a0865ef2417c0fbd\""
Jan 13 21:27:11.263358 kubelet[2578]: E0113 21:27:11.263323 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:11.264189 containerd[1453]: time="2025-01-13T21:27:11.264167387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 21:27:11.315530 kubelet[2578]: E0113 21:27:11.315496 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:27:11.315530 kubelet[2578]: W0113 21:27:11.315519 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:27:11.315674 kubelet[2578]: E0113 21:27:11.315544 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the FlexVolume init probe failure repeats at 21:27:11.315 through 21:27:11.328 ...]
Jan 13 21:27:11.511950 kubelet[2578]: E0113 21:27:11.511921 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:11.512465 containerd[1453]: time="2025-01-13T21:27:11.512430551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8dht7,Uid:2fe4b44c-08c5-4c04-bf51-f2446167f749,Namespace:calico-system,Attempt:0,}"
Jan 13 21:27:11.536157 containerd[1453]: time="2025-01-13T21:27:11.536065442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:11.536157 containerd[1453]: time="2025-01-13T21:27:11.536127969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:11.536157 containerd[1453]: time="2025-01-13T21:27:11.536142126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:11.536462 containerd[1453]: time="2025-01-13T21:27:11.536224251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:11.556416 systemd[1]: Started cri-containerd-4587c71cd4a8d4e16ac30b9fe69d10497809492e6b04727deca0314a29708866.scope - libcontainer container 4587c71cd4a8d4e16ac30b9fe69d10497809492e6b04727deca0314a29708866.
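csi-node-driver-c4drk keeps failing to sync with "cni plugin not initialized" (21:27:11.038 above, and again at 21:27:12.668 just below) because the container runtime reports NetworkReady=false until a CNI network config exists on the node; that config is what calico-node will write through the cni-net-dir host-path volume attached earlier. A rough sketch of the same readiness condition, assuming the conventional conf dir /etc/cni/net.d:

```go
// Sketch: a node is CNI-ready once at least one .conf/.conflist file
// exists in the CNI configuration directory.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Glob only errors on a malformed pattern, so the error is ignored here.
	confs, _ := filepath.Glob("/etc/cni/net.d/*.conf*")
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: cni plugin not initialized")
		return
	}
	fmt.Println("CNI configs present:", confs)
}
```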
Jan 13 21:27:11.578406 containerd[1453]: time="2025-01-13T21:27:11.578350020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8dht7,Uid:2fe4b44c-08c5-4c04-bf51-f2446167f749,Namespace:calico-system,Attempt:0,} returns sandbox id \"4587c71cd4a8d4e16ac30b9fe69d10497809492e6b04727deca0314a29708866\""
Jan 13 21:27:11.578967 kubelet[2578]: E0113 21:27:11.578943 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:12.659221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687285433.mount: Deactivated successfully.
Jan 13 21:27:12.668140 kubelet[2578]: E0113 21:27:12.668067 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c4drk" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36"
Jan 13 21:27:13.176279 containerd[1453]: time="2025-01-13T21:27:13.176207760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:13.177039 containerd[1453]: time="2025-01-13T21:27:13.176977608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 13 21:27:13.178240 containerd[1453]: time="2025-01-13T21:27:13.178207913Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:13.180282 containerd[1453]: time="2025-01-13T21:27:13.180229457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:13.180844 containerd[1453]: time="2025-01-13T21:27:13.180810941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.916613418s"
Jan 13 21:27:13.180885 containerd[1453]: time="2025-01-13T21:27:13.180843232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 21:27:13.191211 containerd[1453]: time="2025-01-13T21:27:13.191170232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 21:27:13.200902 containerd[1453]: time="2025-01-13T21:27:13.200867920Z" level=info msg="CreateContainer within sandbox \"4c3eba4964bcdfced8389e35e5611616a72efb3ebcd4cb93a0865ef2417c0fbd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 21:27:13.213726 containerd[1453]: time="2025-01-13T21:27:13.213678715Z" level=info msg="CreateContainer within sandbox \"4c3eba4964bcdfced8389e35e5611616a72efb3ebcd4cb93a0865ef2417c0fbd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e1f46bf31c35da48f27db399061c618bcf438e20ddb500b766be9640ada4aee7\""
msg="StartContainer for \"e1f46bf31c35da48f27db399061c618bcf438e20ddb500b766be9640ada4aee7\"" Jan 13 21:27:13.245427 systemd[1]: Started cri-containerd-e1f46bf31c35da48f27db399061c618bcf438e20ddb500b766be9640ada4aee7.scope - libcontainer container e1f46bf31c35da48f27db399061c618bcf438e20ddb500b766be9640ada4aee7. Jan 13 21:27:13.525023 containerd[1453]: time="2025-01-13T21:27:13.524979296Z" level=info msg="StartContainer for \"e1f46bf31c35da48f27db399061c618bcf438e20ddb500b766be9640ada4aee7\" returns successfully" Jan 13 21:27:13.724021 kubelet[2578]: E0113 21:27:13.723988 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:13.735818 kubelet[2578]: I0113 21:27:13.735645 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-76d87cf7d4-drzsc" podStartSLOduration=1.815932364 podStartE2EDuration="3.735602003s" podCreationTimestamp="2025-01-13 21:27:10 +0000 UTC" firstStartedPulling="2025-01-13 21:27:11.263778334 +0000 UTC m=+22.707180373" lastFinishedPulling="2025-01-13 21:27:13.183447972 +0000 UTC m=+24.626850012" observedRunningTime="2025-01-13 21:27:13.735282171 +0000 UTC m=+25.178684210" watchObservedRunningTime="2025-01-13 21:27:13.735602003 +0000 UTC m=+25.179004032" Jan 13 21:27:13.808442 kubelet[2578]: E0113 21:27:13.808321 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.808442 kubelet[2578]: W0113 21:27:13.808343 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.808442 kubelet[2578]: E0113 21:27:13.808365 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.808641 kubelet[2578]: E0113 21:27:13.808629 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.808672 kubelet[2578]: W0113 21:27:13.808646 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.808697 kubelet[2578]: E0113 21:27:13.808671 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.808936 kubelet[2578]: E0113 21:27:13.808918 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.808936 kubelet[2578]: W0113 21:27:13.808932 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.809009 kubelet[2578]: E0113 21:27:13.808946 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:13.809164 kubelet[2578]: E0113 21:27:13.809149 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.809164 kubelet[2578]: W0113 21:27:13.809160 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.809232 kubelet[2578]: E0113 21:27:13.809172 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.809385 kubelet[2578]: E0113 21:27:13.809370 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.809385 kubelet[2578]: W0113 21:27:13.809380 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.809448 kubelet[2578]: E0113 21:27:13.809390 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.809586 kubelet[2578]: E0113 21:27:13.809572 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.809586 kubelet[2578]: W0113 21:27:13.809582 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.809643 kubelet[2578]: E0113 21:27:13.809594 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.809787 kubelet[2578]: E0113 21:27:13.809773 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.809787 kubelet[2578]: W0113 21:27:13.809782 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.809857 kubelet[2578]: E0113 21:27:13.809793 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.809995 kubelet[2578]: E0113 21:27:13.809976 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.809995 kubelet[2578]: W0113 21:27:13.809987 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.810044 kubelet[2578]: E0113 21:27:13.809998 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:13.810200 kubelet[2578]: E0113 21:27:13.810182 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.810200 kubelet[2578]: W0113 21:27:13.810192 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.810200 kubelet[2578]: E0113 21:27:13.810201 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.810397 kubelet[2578]: E0113 21:27:13.810379 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.810397 kubelet[2578]: W0113 21:27:13.810389 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.810397 kubelet[2578]: E0113 21:27:13.810399 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.810584 kubelet[2578]: E0113 21:27:13.810566 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.810584 kubelet[2578]: W0113 21:27:13.810576 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.810584 kubelet[2578]: E0113 21:27:13.810586 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.810771 kubelet[2578]: E0113 21:27:13.810753 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.810771 kubelet[2578]: W0113 21:27:13.810763 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.810771 kubelet[2578]: E0113 21:27:13.810771 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.810972 kubelet[2578]: E0113 21:27:13.810937 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.810972 kubelet[2578]: W0113 21:27:13.810958 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.810972 kubelet[2578]: E0113 21:27:13.810969 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:13.811232 kubelet[2578]: E0113 21:27:13.811217 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.811232 kubelet[2578]: W0113 21:27:13.811228 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.811301 kubelet[2578]: E0113 21:27:13.811239 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.811462 kubelet[2578]: E0113 21:27:13.811440 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.811462 kubelet[2578]: W0113 21:27:13.811453 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.811515 kubelet[2578]: E0113 21:27:13.811464 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.833859 kubelet[2578]: E0113 21:27:13.833827 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.833859 kubelet[2578]: W0113 21:27:13.833851 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.833940 kubelet[2578]: E0113 21:27:13.833876 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.834128 kubelet[2578]: E0113 21:27:13.834113 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.834128 kubelet[2578]: W0113 21:27:13.834124 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.834188 kubelet[2578]: E0113 21:27:13.834142 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.834398 kubelet[2578]: E0113 21:27:13.834383 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.834398 kubelet[2578]: W0113 21:27:13.834394 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.834458 kubelet[2578]: E0113 21:27:13.834409 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:13.834656 kubelet[2578]: E0113 21:27:13.834633 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.834656 kubelet[2578]: W0113 21:27:13.834648 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.834725 kubelet[2578]: E0113 21:27:13.834667 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.834887 kubelet[2578]: E0113 21:27:13.834872 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.834887 kubelet[2578]: W0113 21:27:13.834882 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.834941 kubelet[2578]: E0113 21:27:13.834898 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.835108 kubelet[2578]: E0113 21:27:13.835093 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.835108 kubelet[2578]: W0113 21:27:13.835103 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.835173 kubelet[2578]: E0113 21:27:13.835119 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.835461 kubelet[2578]: E0113 21:27:13.835326 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.835461 kubelet[2578]: W0113 21:27:13.835338 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.835461 kubelet[2578]: E0113 21:27:13.835348 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.835961 kubelet[2578]: E0113 21:27:13.835837 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.835961 kubelet[2578]: W0113 21:27:13.835852 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.835961 kubelet[2578]: E0113 21:27:13.835875 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:13.836214 kubelet[2578]: E0113 21:27:13.836195 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.836214 kubelet[2578]: W0113 21:27:13.836208 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.836421 kubelet[2578]: E0113 21:27:13.836221 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.836633 kubelet[2578]: E0113 21:27:13.836537 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.836633 kubelet[2578]: W0113 21:27:13.836550 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.836633 kubelet[2578]: E0113 21:27:13.836562 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.836811 kubelet[2578]: E0113 21:27:13.836796 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.836845 kubelet[2578]: W0113 21:27:13.836812 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.836845 kubelet[2578]: E0113 21:27:13.836831 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.837108 kubelet[2578]: E0113 21:27:13.837086 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.837142 kubelet[2578]: W0113 21:27:13.837106 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.837168 kubelet[2578]: E0113 21:27:13.837147 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.837366 kubelet[2578]: E0113 21:27:13.837350 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.837366 kubelet[2578]: W0113 21:27:13.837362 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.837432 kubelet[2578]: E0113 21:27:13.837392 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:13.837678 kubelet[2578]: E0113 21:27:13.837648 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.837678 kubelet[2578]: W0113 21:27:13.837662 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.837845 kubelet[2578]: E0113 21:27:13.837697 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.837916 kubelet[2578]: E0113 21:27:13.837902 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.837916 kubelet[2578]: W0113 21:27:13.837913 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.837968 kubelet[2578]: E0113 21:27:13.837930 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.838146 kubelet[2578]: E0113 21:27:13.838132 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.838179 kubelet[2578]: W0113 21:27:13.838142 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.838179 kubelet[2578]: E0113 21:27:13.838164 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.838416 kubelet[2578]: E0113 21:27:13.838401 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.838416 kubelet[2578]: W0113 21:27:13.838411 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.838483 kubelet[2578]: E0113 21:27:13.838423 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:13.838817 kubelet[2578]: E0113 21:27:13.838797 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:13.838817 kubelet[2578]: W0113 21:27:13.838809 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:13.838817 kubelet[2578]: E0113 21:27:13.838819 2578 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:14.667352 kubelet[2578]: E0113 21:27:14.667301 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c4drk" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36" Jan 13 21:27:14.710247 containerd[1453]: time="2025-01-13T21:27:14.710189443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:14.711092 containerd[1453]: time="2025-01-13T21:27:14.711001541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 21:27:14.712377 containerd[1453]: time="2025-01-13T21:27:14.712344578Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:14.714606 containerd[1453]: time="2025-01-13T21:27:14.714581256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:14.715147 containerd[1453]: time="2025-01-13T21:27:14.715104119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.523899591s" Jan 13 21:27:14.715201 containerd[1453]: time="2025-01-13T21:27:14.715146039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:27:14.716763 containerd[1453]: time="2025-01-13T21:27:14.716718988Z" level=info msg="CreateContainer within sandbox \"4587c71cd4a8d4e16ac30b9fe69d10497809492e6b04727deca0314a29708866\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:27:14.724905 kubelet[2578]: I0113 21:27:14.724877 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:27:14.725472 kubelet[2578]: E0113 21:27:14.725448 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:14.731223 containerd[1453]: time="2025-01-13T21:27:14.731186142Z" level=info msg="CreateContainer within sandbox \"4587c71cd4a8d4e16ac30b9fe69d10497809492e6b04727deca0314a29708866\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9fe9e85cb667aec5089238c5e052898fdc7b6085de384d0ce1df05df95a4fd3a\"" Jan 13 21:27:14.731781 containerd[1453]: time="2025-01-13T21:27:14.731716391Z" level=info msg="StartContainer for \"9fe9e85cb667aec5089238c5e052898fdc7b6085de384d0ce1df05df95a4fd3a\"" Jan 13 21:27:14.761400 systemd[1]: Started cri-containerd-9fe9e85cb667aec5089238c5e052898fdc7b6085de384d0ce1df05df95a4fd3a.scope - libcontainer container 9fe9e85cb667aec5089238c5e052898fdc7b6085de384d0ce1df05df95a4fd3a. 
Jan 13 21:27:14.794311 containerd[1453]: time="2025-01-13T21:27:14.791918743Z" level=info msg="StartContainer for \"9fe9e85cb667aec5089238c5e052898fdc7b6085de384d0ce1df05df95a4fd3a\" returns successfully"
Jan 13 21:27:14.803093 systemd[1]: cri-containerd-9fe9e85cb667aec5089238c5e052898fdc7b6085de384d0ce1df05df95a4fd3a.scope: Deactivated successfully.
Jan 13 21:27:14.825434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fe9e85cb667aec5089238c5e052898fdc7b6085de384d0ce1df05df95a4fd3a-rootfs.mount: Deactivated successfully.
Jan 13 21:27:14.869502 containerd[1453]: time="2025-01-13T21:27:14.867113771Z" level=info msg="shim disconnected" id=9fe9e85cb667aec5089238c5e052898fdc7b6085de384d0ce1df05df95a4fd3a namespace=k8s.io
Jan 13 21:27:14.869502 containerd[1453]: time="2025-01-13T21:27:14.869486865Z" level=warning msg="cleaning up after shim disconnected" id=9fe9e85cb667aec5089238c5e052898fdc7b6085de384d0ce1df05df95a4fd3a namespace=k8s.io
Jan 13 21:27:14.869502 containerd[1453]: time="2025-01-13T21:27:14.869503156Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:27:15.728726 kubelet[2578]: E0113 21:27:15.728689 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:15.729687 containerd[1453]: time="2025-01-13T21:27:15.729651033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 21:27:16.666783 kubelet[2578]: E0113 21:27:16.666717 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c4drk" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36"
Jan 13 21:27:18.372726 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:56716.service - OpenSSH per-connection server daemon (10.0.0.1:56716).
Jan 13 21:27:18.412904 sshd[3293]: Accepted publickey for core from 10.0.0.1 port 56716 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:18.414715 sshd[3293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:18.419059 systemd-logind[1437]: New session 8 of user core.
Jan 13 21:27:18.425419 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:27:18.540177 sshd[3293]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:18.544035 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:56716.service: Deactivated successfully.
Jan 13 21:27:18.546126 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:27:18.546969 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit.
Jan 13 21:27:18.548017 systemd-logind[1437]: Removed session 8.
Jan 13 21:27:18.667410 kubelet[2578]: E0113 21:27:18.667255 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c4drk" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36"
Jan 13 21:27:20.666977 kubelet[2578]: E0113 21:27:20.666930 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c4drk" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36"
Jan 13 21:27:21.745837 containerd[1453]: time="2025-01-13T21:27:21.745772397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:21.774035 containerd[1453]: time="2025-01-13T21:27:21.773975132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 13 21:27:21.825052 containerd[1453]: time="2025-01-13T21:27:21.824985835Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:21.873076 containerd[1453]: time="2025-01-13T21:27:21.873020768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:21.873974 containerd[1453]: time="2025-01-13T21:27:21.873949663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.144257272s"
Jan 13 21:27:21.874058 containerd[1453]: time="2025-01-13T21:27:21.873976092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 13 21:27:21.875515 containerd[1453]: time="2025-01-13T21:27:21.875478045Z" level=info msg="CreateContainer within sandbox \"4587c71cd4a8d4e16ac30b9fe69d10497809492e6b04727deca0314a29708866\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 21:27:22.122287 containerd[1453]: time="2025-01-13T21:27:22.122223670Z" level=info msg="CreateContainer within sandbox \"4587c71cd4a8d4e16ac30b9fe69d10497809492e6b04727deca0314a29708866\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"437b8b6e434a5d9cb4e6c24f553df02d5937dda77d27b75dfb25081331c46961\""
Jan 13 21:27:22.122860 containerd[1453]: time="2025-01-13T21:27:22.122813979Z" level=info msg="StartContainer for \"437b8b6e434a5d9cb4e6c24f553df02d5937dda77d27b75dfb25081331c46961\""
Jan 13 21:27:22.154407 systemd[1]: Started cri-containerd-437b8b6e434a5d9cb4e6c24f553df02d5937dda77d27b75dfb25081331c46961.scope - libcontainer container 437b8b6e434a5d9cb4e6c24f553df02d5937dda77d27b75dfb25081331c46961.
Jan 13 21:27:22.298457 containerd[1453]: time="2025-01-13T21:27:22.298405860Z" level=info msg="StartContainer for \"437b8b6e434a5d9cb4e6c24f553df02d5937dda77d27b75dfb25081331c46961\" returns successfully"
Jan 13 21:27:22.667249 kubelet[2578]: E0113 21:27:22.667192 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c4drk" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36"
Jan 13 21:27:22.742142 kubelet[2578]: E0113 21:27:22.742110 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:23.500076 systemd[1]: cri-containerd-437b8b6e434a5d9cb4e6c24f553df02d5937dda77d27b75dfb25081331c46961.scope: Deactivated successfully.
Jan 13 21:27:23.503486 kubelet[2578]: I0113 21:27:23.503459 2578 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:27:23.526042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-437b8b6e434a5d9cb4e6c24f553df02d5937dda77d27b75dfb25081331c46961-rootfs.mount: Deactivated successfully.
Jan 13 21:27:23.533506 kubelet[2578]: I0113 21:27:23.533426 2578 topology_manager.go:215] "Topology Admit Handler" podUID="f7353521-0488-482c-a756-367e20c4c1b4" podNamespace="kube-system" podName="coredns-76f75df574-lgbw8"
Jan 13 21:27:23.533896 kubelet[2578]: I0113 21:27:23.533578 2578 topology_manager.go:215] "Topology Admit Handler" podUID="4c856975-e2ef-4c9d-acd0-da77d92975f0" podNamespace="calico-system" podName="calico-kube-controllers-b67c4f7b5-5nsgj"
Jan 13 21:27:23.533896 kubelet[2578]: I0113 21:27:23.533642 2578 topology_manager.go:215] "Topology Admit Handler" podUID="fe4f7602-38f2-4964-b7eb-58611454a234" podNamespace="kube-system" podName="coredns-76f75df574-4dhwt"
Jan 13 21:27:23.537296 kubelet[2578]: I0113 21:27:23.537230 2578 topology_manager.go:215] "Topology Admit Handler" podUID="4a1a44d8-4c43-4f74-8c7d-42c935a4693e" podNamespace="calico-apiserver" podName="calico-apiserver-795448cffc-7gjp9"
Jan 13 21:27:23.538790 kubelet[2578]: I0113 21:27:23.538770 2578 topology_manager.go:215] "Topology Admit Handler" podUID="1575694f-4276-4e47-b4a5-e229a7267251" podNamespace="calico-apiserver" podName="calico-apiserver-795448cffc-hz6r5"
Jan 13 21:27:23.563505 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:56730.service - OpenSSH per-connection server daemon (10.0.0.1:56730).
Jan 13 21:27:23.572044 systemd[1]: Created slice kubepods-burstable-podfe4f7602_38f2_4964_b7eb_58611454a234.slice - libcontainer container kubepods-burstable-podfe4f7602_38f2_4964_b7eb_58611454a234.slice.
Jan 13 21:27:23.579851 systemd[1]: Created slice kubepods-burstable-podf7353521_0488_482c_a756_367e20c4c1b4.slice - libcontainer container kubepods-burstable-podf7353521_0488_482c_a756_367e20c4c1b4.slice.
Jan 13 21:27:23.587055 systemd[1]: Created slice kubepods-besteffort-pod4c856975_e2ef_4c9d_acd0_da77d92975f0.slice - libcontainer container kubepods-besteffort-pod4c856975_e2ef_4c9d_acd0_da77d92975f0.slice.
Jan 13 21:27:23.592567 systemd[1]: Created slice kubepods-besteffort-pod1575694f_4276_4e47_b4a5_e229a7267251.slice - libcontainer container kubepods-besteffort-pod1575694f_4276_4e47_b4a5_e229a7267251.slice.
Jan 13 21:27:23.598477 systemd[1]: Created slice kubepods-besteffort-pod4a1a44d8_4c43_4f74_8c7d_42c935a4693e.slice - libcontainer container kubepods-besteffort-pod4a1a44d8_4c43_4f74_8c7d_42c935a4693e.slice.
Jan 13 21:27:23.602612 kubelet[2578]: I0113 21:27:23.602561 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7353521-0488-482c-a756-367e20c4c1b4-config-volume\") pod \"coredns-76f75df574-lgbw8\" (UID: \"f7353521-0488-482c-a756-367e20c4c1b4\") " pod="kube-system/coredns-76f75df574-lgbw8"
Jan 13 21:27:23.602730 kubelet[2578]: I0113 21:27:23.602638 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9sd5\" (UniqueName: \"kubernetes.io/projected/4c856975-e2ef-4c9d-acd0-da77d92975f0-kube-api-access-k9sd5\") pod \"calico-kube-controllers-b67c4f7b5-5nsgj\" (UID: \"4c856975-e2ef-4c9d-acd0-da77d92975f0\") " pod="calico-system/calico-kube-controllers-b67c4f7b5-5nsgj"
Jan 13 21:27:23.602730 kubelet[2578]: I0113 21:27:23.602673 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwfk9\" (UniqueName: \"kubernetes.io/projected/1575694f-4276-4e47-b4a5-e229a7267251-kube-api-access-kwfk9\") pod \"calico-apiserver-795448cffc-hz6r5\" (UID: \"1575694f-4276-4e47-b4a5-e229a7267251\") " pod="calico-apiserver/calico-apiserver-795448cffc-hz6r5"
Jan 13 21:27:23.602779 kubelet[2578]: I0113 21:27:23.602744 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c856975-e2ef-4c9d-acd0-da77d92975f0-tigera-ca-bundle\") pod \"calico-kube-controllers-b67c4f7b5-5nsgj\" (UID: \"4c856975-e2ef-4c9d-acd0-da77d92975f0\") " pod="calico-system/calico-kube-controllers-b67c4f7b5-5nsgj"
Jan 13 21:27:23.602779 kubelet[2578]: I0113 21:27:23.602775 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs4kc\" (UniqueName: \"kubernetes.io/projected/4a1a44d8-4c43-4f74-8c7d-42c935a4693e-kube-api-access-gs4kc\") pod \"calico-apiserver-795448cffc-7gjp9\" (UID: \"4a1a44d8-4c43-4f74-8c7d-42c935a4693e\") " pod="calico-apiserver/calico-apiserver-795448cffc-7gjp9"
Jan 13 21:27:23.602880 kubelet[2578]: I0113 21:27:23.602835 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8j2\" (UniqueName: \"kubernetes.io/projected/fe4f7602-38f2-4964-b7eb-58611454a234-kube-api-access-9c8j2\") pod \"coredns-76f75df574-4dhwt\" (UID: \"fe4f7602-38f2-4964-b7eb-58611454a234\") " pod="kube-system/coredns-76f75df574-4dhwt"
Jan 13 21:27:23.602952 kubelet[2578]: I0113 21:27:23.602926 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1575694f-4276-4e47-b4a5-e229a7267251-calico-apiserver-certs\") pod \"calico-apiserver-795448cffc-hz6r5\" (UID: \"1575694f-4276-4e47-b4a5-e229a7267251\") " pod="calico-apiserver/calico-apiserver-795448cffc-hz6r5"
Jan 13 21:27:23.602986 kubelet[2578]: I0113 21:27:23.602974 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6qvv\" (UniqueName: \"kubernetes.io/projected/f7353521-0488-482c-a756-367e20c4c1b4-kube-api-access-k6qvv\") pod \"coredns-76f75df574-lgbw8\" (UID: \"f7353521-0488-482c-a756-367e20c4c1b4\") " pod="kube-system/coredns-76f75df574-lgbw8"
Jan 13 21:27:23.603035 kubelet[2578]: I0113 21:27:23.603007 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4a1a44d8-4c43-4f74-8c7d-42c935a4693e-calico-apiserver-certs\") pod \"calico-apiserver-795448cffc-7gjp9\" (UID: \"4a1a44d8-4c43-4f74-8c7d-42c935a4693e\") " pod="calico-apiserver/calico-apiserver-795448cffc-7gjp9"
Jan 13 21:27:23.603088 kubelet[2578]: I0113 21:27:23.603054 2578 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe4f7602-38f2-4964-b7eb-58611454a234-config-volume\") pod \"coredns-76f75df574-4dhwt\" (UID: \"fe4f7602-38f2-4964-b7eb-58611454a234\") " pod="kube-system/coredns-76f75df574-4dhwt"
Jan 13 21:27:23.743664 kubelet[2578]: E0113 21:27:23.743615 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:23.820306 sshd[3368]: Accepted publickey for core from 10.0.0.1 port 56730 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:23.822043 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:23.825927 systemd-logind[1437]: New session 9 of user core.
Jan 13 21:27:23.830955 containerd[1453]: time="2025-01-13T21:27:23.830885522Z" level=info msg="shim disconnected" id=437b8b6e434a5d9cb4e6c24f553df02d5937dda77d27b75dfb25081331c46961 namespace=k8s.io
Jan 13 21:27:23.830955 containerd[1453]: time="2025-01-13T21:27:23.830952308Z" level=warning msg="cleaning up after shim disconnected" id=437b8b6e434a5d9cb4e6c24f553df02d5937dda77d27b75dfb25081331c46961 namespace=k8s.io
Jan 13 21:27:23.835478 containerd[1453]: time="2025-01-13T21:27:23.830961775Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:27:23.835424 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 21:27:23.954524 sshd[3368]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:23.961198 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:56730.service: Deactivated successfully.
Jan 13 21:27:23.963362 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:27:23.964089 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:27:23.965029 systemd-logind[1437]: Removed session 9.
Jan 13 21:27:24.082698 kubelet[2578]: E0113 21:27:24.082575 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:24.083444 containerd[1453]: time="2025-01-13T21:27:24.083341897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4dhwt,Uid:fe4f7602-38f2-4964-b7eb-58611454a234,Namespace:kube-system,Attempt:0,}"
Jan 13 21:27:24.086876 containerd[1453]: time="2025-01-13T21:27:24.086804591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795448cffc-hz6r5,Uid:1575694f-4276-4e47-b4a5-e229a7267251,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:27:24.087443 containerd[1453]: time="2025-01-13T21:27:24.087255087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b67c4f7b5-5nsgj,Uid:4c856975-e2ef-4c9d-acd0-da77d92975f0,Namespace:calico-system,Attempt:0,}"
Jan 13 21:27:24.087511 containerd[1453]: time="2025-01-13T21:27:24.087481012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795448cffc-7gjp9,Uid:4a1a44d8-4c43-4f74-8c7d-42c935a4693e,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:27:24.089152 kubelet[2578]: E0113 21:27:24.089119 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:24.089557 containerd[1453]: time="2025-01-13T21:27:24.089521555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lgbw8,Uid:f7353521-0488-482c-a756-367e20c4c1b4,Namespace:kube-system,Attempt:0,}"
Jan 13 21:27:24.215846 containerd[1453]: time="2025-01-13T21:27:24.215760926Z" level=error msg="Failed to destroy network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.216979 containerd[1453]: time="2025-01-13T21:27:24.216519261Z" level=error msg="encountered an error cleaning up failed sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.216979 containerd[1453]: time="2025-01-13T21:27:24.216577981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4dhwt,Uid:fe4f7602-38f2-4964-b7eb-58611454a234,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.217061 kubelet[2578]: E0113 21:27:24.216837 2578 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.217061 kubelet[2578]: E0113 21:27:24.216899 2578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4dhwt"
Jan 13 21:27:24.217061 kubelet[2578]: E0113 21:27:24.216922 2578 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4dhwt"
Jan 13 21:27:24.217197 kubelet[2578]: E0113 21:27:24.216980 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4dhwt_kube-system(fe4f7602-38f2-4964-b7eb-58611454a234)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4dhwt_kube-system(fe4f7602-38f2-4964-b7eb-58611454a234)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4dhwt" podUID="fe4f7602-38f2-4964-b7eb-58611454a234"
Jan 13 21:27:24.223317 containerd[1453]: time="2025-01-13T21:27:24.222868948Z" level=error msg="Failed to destroy network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.223472 containerd[1453]: time="2025-01-13T21:27:24.223328831Z" level=error msg="encountered an error cleaning up failed sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.223472 containerd[1453]: time="2025-01-13T21:27:24.223375049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795448cffc-hz6r5,Uid:1575694f-4276-4e47-b4a5-e229a7267251,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.223574 kubelet[2578]: E0113 21:27:24.223525 2578 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.223574 kubelet[2578]: E0113 21:27:24.223561 2578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795448cffc-hz6r5"
Jan 13 21:27:24.223643 kubelet[2578]: E0113 21:27:24.223580 2578 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795448cffc-hz6r5"
Jan 13 21:27:24.223643 kubelet[2578]: E0113 21:27:24.223622 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-795448cffc-hz6r5_calico-apiserver(1575694f-4276-4e47-b4a5-e229a7267251)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-795448cffc-hz6r5_calico-apiserver(1575694f-4276-4e47-b4a5-e229a7267251)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-795448cffc-hz6r5" podUID="1575694f-4276-4e47-b4a5-e229a7267251"
Jan 13 21:27:24.229944 containerd[1453]: time="2025-01-13T21:27:24.229903532Z" level=error msg="Failed to destroy network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.230506 containerd[1453]: time="2025-01-13T21:27:24.230469114Z" level=error msg="encountered an error cleaning up failed sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.230506 containerd[1453]: time="2025-01-13T21:27:24.230514699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b67c4f7b5-5nsgj,Uid:4c856975-e2ef-4c9d-acd0-da77d92975f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.230812 kubelet[2578]: E0113 21:27:24.230669 2578 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.230812 kubelet[2578]: E0113 21:27:24.230733 2578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b67c4f7b5-5nsgj"
Jan 13 21:27:24.230812 kubelet[2578]: E0113 21:27:24.230753 2578 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b67c4f7b5-5nsgj"
Jan 13 21:27:24.230925 kubelet[2578]: E0113 21:27:24.230807 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b67c4f7b5-5nsgj_calico-system(4c856975-e2ef-4c9d-acd0-da77d92975f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b67c4f7b5-5nsgj_calico-system(4c856975-e2ef-4c9d-acd0-da77d92975f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b67c4f7b5-5nsgj" podUID="4c856975-e2ef-4c9d-acd0-da77d92975f0"
Jan 13 21:27:24.241339 containerd[1453]: time="2025-01-13T21:27:24.241273350Z" level=error msg="Failed to destroy network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.241764 containerd[1453]: time="2025-01-13T21:27:24.241718676Z" level=error msg="encountered an error cleaning up failed sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.241843 containerd[1453]: time="2025-01-13T21:27:24.241788437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795448cffc-7gjp9,Uid:4a1a44d8-4c43-4f74-8c7d-42c935a4693e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.242094 kubelet[2578]: E0113 21:27:24.242062 2578 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.242094 kubelet[2578]: E0113 21:27:24.242100 2578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795448cffc-7gjp9"
Jan 13 21:27:24.242277 kubelet[2578]: E0113 21:27:24.242120 2578 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795448cffc-7gjp9"
Jan 13 21:27:24.242277 kubelet[2578]: E0113 21:27:24.242177 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-795448cffc-7gjp9_calico-apiserver(4a1a44d8-4c43-4f74-8c7d-42c935a4693e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-795448cffc-7gjp9_calico-apiserver(4a1a44d8-4c43-4f74-8c7d-42c935a4693e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-795448cffc-7gjp9" podUID="4a1a44d8-4c43-4f74-8c7d-42c935a4693e"
Jan 13 21:27:24.245929 containerd[1453]: time="2025-01-13T21:27:24.245887938Z" level=error msg="Failed to destroy network for sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.246289 containerd[1453]: time="2025-01-13T21:27:24.246245008Z" level=error msg="encountered an error cleaning up failed sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:27:24.246327 containerd[1453]: time="2025-01-13T21:27:24.246308207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lgbw8,Uid:f7353521-0488-482c-a756-367e20c4c1b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory:
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.246559 kubelet[2578]: E0113 21:27:24.246512 2578 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.246615 kubelet[2578]: E0113 21:27:24.246592 2578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-lgbw8" Jan 13 21:27:24.246648 kubelet[2578]: E0113 21:27:24.246616 2578 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-lgbw8" Jan 13 21:27:24.246707 kubelet[2578]: E0113 21:27:24.246693 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-lgbw8_kube-system(f7353521-0488-482c-a756-367e20c4c1b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-lgbw8_kube-system(f7353521-0488-482c-a756-367e20c4c1b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-lgbw8" podUID="f7353521-0488-482c-a756-367e20c4c1b4" Jan 13 21:27:24.674220 systemd[1]: Created slice kubepods-besteffort-pod4b709ff7_1b29_4a55_8a27_61c5d7be7f36.slice - libcontainer container kubepods-besteffort-pod4b709ff7_1b29_4a55_8a27_61c5d7be7f36.slice. 
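[Note on the escaping in the entries above: the deepening backslashes in the pod_workers.go messages are an artifact of layered error wrapping. The CNI plugin's message is quoted once by the runtime's RPC error and quoted again when the kubelet re-serializes the whole chain into its own structured log line. A toy Go sketch of that chain, illustrative only; the names below are invented and this is not the kubelet's actual call path:

package main

import "fmt"

// Each layer re-quotes the error string of the layer below it, which is why
// the log shows \" at one nesting depth and \\\" at the next. Toy
// reconstruction, not kubelet code.
func main() {
	cni := fmt.Errorf(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
	rpc := fmt.Errorf("rpc error: code = Unknown desc = failed to setup network for sandbox: %w", cni)
	// The pod worker quotes the entire runtime error once more when reporting:
	sync := fmt.Errorf("failed to \"CreatePodSandbox\" with CreatePodSandboxError: %q", rpc.Error())
	fmt.Println(sync) // %q adds one level of quote escaping per layer, as seen above
}

When the final string is itself logged inside err="...", every quote is escaped once more, producing the \\\" runs visible in the entries above.]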
Jan 13 21:27:24.676824 containerd[1453]: time="2025-01-13T21:27:24.676762893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c4drk,Uid:4b709ff7-1b29-4a55-8a27-61c5d7be7f36,Namespace:calico-system,Attempt:0,}" Jan 13 21:27:24.745173 containerd[1453]: time="2025-01-13T21:27:24.745120819Z" level=error msg="Failed to destroy network for sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.747705 containerd[1453]: time="2025-01-13T21:27:24.747650190Z" level=error msg="encountered an error cleaning up failed sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.747772 containerd[1453]: time="2025-01-13T21:27:24.747732675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c4drk,Uid:4b709ff7-1b29-4a55-8a27-61c5d7be7f36,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.748517 kubelet[2578]: E0113 21:27:24.747969 2578 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.748517 kubelet[2578]: E0113 21:27:24.748029 2578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c4drk" Jan 13 21:27:24.748517 kubelet[2578]: E0113 21:27:24.748051 2578 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c4drk" Jan 13 21:27:24.749134 kubelet[2578]: E0113 21:27:24.748119 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c4drk_calico-system(4b709ff7-1b29-4a55-8a27-61c5d7be7f36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c4drk_calico-system(4b709ff7-1b29-4a55-8a27-61c5d7be7f36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c4drk" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36" Jan 13 21:27:24.749134 kubelet[2578]: I0113 21:27:24.748190 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Jan 13 21:27:24.749290 containerd[1453]: time="2025-01-13T21:27:24.748768350Z" level=info msg="StopPodSandbox for \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\"" Jan 13 21:27:24.749290 containerd[1453]: time="2025-01-13T21:27:24.749005506Z" level=info msg="Ensure that sandbox a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0 in task-service has been cleanup successfully" Jan 13 21:27:24.749465 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac-shm.mount: Deactivated successfully. Jan 13 21:27:24.750539 kubelet[2578]: I0113 21:27:24.749956 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Jan 13 21:27:24.751350 containerd[1453]: time="2025-01-13T21:27:24.750855521Z" level=info msg="StopPodSandbox for \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\"" Jan 13 21:27:24.751350 containerd[1453]: time="2025-01-13T21:27:24.751053032Z" level=info msg="Ensure that sandbox 76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50 in task-service has been cleanup successfully" Jan 13 21:27:24.752252 kubelet[2578]: I0113 21:27:24.752230 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:24.752938 containerd[1453]: time="2025-01-13T21:27:24.752895713Z" level=info msg="StopPodSandbox for \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\"" Jan 13 21:27:24.753923 containerd[1453]: time="2025-01-13T21:27:24.753890131Z" level=info msg="Ensure that sandbox b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8 in task-service has been cleanup successfully" Jan 13 21:27:24.757511 kubelet[2578]: E0113 21:27:24.757480 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:24.758950 containerd[1453]: time="2025-01-13T21:27:24.758660362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:27:24.759570 kubelet[2578]: I0113 21:27:24.759195 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:24.759858 containerd[1453]: time="2025-01-13T21:27:24.759790264Z" level=info msg="StopPodSandbox for \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\"" Jan 13 21:27:24.760691 containerd[1453]: time="2025-01-13T21:27:24.760655439Z" level=info msg="Ensure that sandbox 65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505 in task-service has been cleanup successfully" Jan 13 21:27:24.765444 kubelet[2578]: I0113 21:27:24.765413 2578 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:24.767542 containerd[1453]: time="2025-01-13T21:27:24.767485629Z" level=info msg="StopPodSandbox for \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\"" Jan 13 21:27:24.768027 containerd[1453]: time="2025-01-13T21:27:24.768001237Z" level=info msg="Ensure that sandbox 645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097 in task-service has been cleanup successfully" Jan 13 21:27:24.811527 containerd[1453]: time="2025-01-13T21:27:24.811469510Z" level=error msg="StopPodSandbox for \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\" failed" error="failed to destroy network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.812298 kubelet[2578]: E0113 21:27:24.812048 2578 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:24.812298 kubelet[2578]: E0113 21:27:24.812151 2578 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8"} Jan 13 21:27:24.812298 kubelet[2578]: E0113 21:27:24.812205 2578 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c856975-e2ef-4c9d-acd0-da77d92975f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:27:24.812298 kubelet[2578]: E0113 21:27:24.812247 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c856975-e2ef-4c9d-acd0-da77d92975f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b67c4f7b5-5nsgj" podUID="4c856975-e2ef-4c9d-acd0-da77d92975f0" Jan 13 21:27:24.816712 containerd[1453]: time="2025-01-13T21:27:24.816589177Z" level=error msg="StopPodSandbox for \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\" failed" error="failed to destroy network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.816891 kubelet[2578]: E0113 21:27:24.816845 2578 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Jan 13 21:27:24.816990 kubelet[2578]: E0113 21:27:24.816901 2578 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0"} Jan 13 21:27:24.816990 kubelet[2578]: E0113 21:27:24.816936 2578 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe4f7602-38f2-4964-b7eb-58611454a234\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:27:24.816990 kubelet[2578]: E0113 21:27:24.816966 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe4f7602-38f2-4964-b7eb-58611454a234\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4dhwt" podUID="fe4f7602-38f2-4964-b7eb-58611454a234" Jan 13 21:27:24.821684 containerd[1453]: time="2025-01-13T21:27:24.821630767Z" level=error msg="StopPodSandbox for \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\" failed" error="failed to destroy network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.821869 kubelet[2578]: E0113 21:27:24.821844 2578 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:24.821966 kubelet[2578]: E0113 21:27:24.821877 2578 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097"} Jan 13 21:27:24.821966 kubelet[2578]: E0113 21:27:24.821909 2578 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1575694f-4276-4e47-b4a5-e229a7267251\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:27:24.821966 kubelet[2578]: E0113 21:27:24.821935 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1575694f-4276-4e47-b4a5-e229a7267251\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-795448cffc-hz6r5" podUID="1575694f-4276-4e47-b4a5-e229a7267251" Jan 13 21:27:24.823868 containerd[1453]: time="2025-01-13T21:27:24.823817515Z" level=error msg="StopPodSandbox for \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\" failed" error="failed to destroy network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.824106 kubelet[2578]: E0113 21:27:24.824072 2578 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Jan 13 21:27:24.824106 kubelet[2578]: E0113 21:27:24.824105 2578 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50"} Jan 13 21:27:24.824182 kubelet[2578]: E0113 21:27:24.824135 2578 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a1a44d8-4c43-4f74-8c7d-42c935a4693e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:27:24.824182 kubelet[2578]: E0113 21:27:24.824156 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a1a44d8-4c43-4f74-8c7d-42c935a4693e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-795448cffc-7gjp9" podUID="4a1a44d8-4c43-4f74-8c7d-42c935a4693e" Jan 13 21:27:24.827652 containerd[1453]: time="2025-01-13T21:27:24.827606541Z" level=error msg="StopPodSandbox for \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\" failed" error="failed to destroy network for sandbox 
\"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:24.827912 kubelet[2578]: E0113 21:27:24.827876 2578 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:24.827974 kubelet[2578]: E0113 21:27:24.827936 2578 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505"} Jan 13 21:27:24.828006 kubelet[2578]: E0113 21:27:24.827976 2578 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7353521-0488-482c-a756-367e20c4c1b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:27:24.828141 kubelet[2578]: E0113 21:27:24.828015 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7353521-0488-482c-a756-367e20c4c1b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-lgbw8" podUID="f7353521-0488-482c-a756-367e20c4c1b4" Jan 13 21:27:25.768625 kubelet[2578]: I0113 21:27:25.768575 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:25.769250 containerd[1453]: time="2025-01-13T21:27:25.769202561Z" level=info msg="StopPodSandbox for \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\"" Jan 13 21:27:25.770239 containerd[1453]: time="2025-01-13T21:27:25.769930899Z" level=info msg="Ensure that sandbox 6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac in task-service has been cleanup successfully" Jan 13 21:27:25.799579 containerd[1453]: time="2025-01-13T21:27:25.799463959Z" level=error msg="StopPodSandbox for \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\" failed" error="failed to destroy network for sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:25.800060 kubelet[2578]: E0113 21:27:25.800034 2578 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:25.800121 kubelet[2578]: E0113 21:27:25.800089 2578 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac"} Jan 13 21:27:25.800149 kubelet[2578]: E0113 21:27:25.800126 2578 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b709ff7-1b29-4a55-8a27-61c5d7be7f36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:27:25.800226 kubelet[2578]: E0113 21:27:25.800157 2578 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b709ff7-1b29-4a55-8a27-61c5d7be7f36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c4drk" podUID="4b709ff7-1b29-4a55-8a27-61c5d7be7f36" Jan 13 21:27:28.979858 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:34264.service - OpenSSH per-connection server daemon (10.0.0.1:34264). Jan 13 21:27:29.022971 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 34264 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:29.024426 sshd[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:29.029812 systemd-logind[1437]: New session 10 of user core. Jan 13 21:27:29.037493 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:27:29.188724 sshd[3776]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:29.195123 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:34264.service: Deactivated successfully. Jan 13 21:27:29.198124 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:27:29.199223 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:27:29.201496 systemd-logind[1437]: Removed session 10. Jan 13 21:27:29.590104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302278566.mount: Deactivated successfully. 
Jan 13 21:27:31.881560 containerd[1453]: time="2025-01-13T21:27:31.881482998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:31.929474 containerd[1453]: time="2025-01-13T21:27:31.929409962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:27:31.962279 containerd[1453]: time="2025-01-13T21:27:31.962229313Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:31.978223 containerd[1453]: time="2025-01-13T21:27:31.978174959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:31.979034 containerd[1453]: time="2025-01-13T21:27:31.978984428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.220268141s" Jan 13 21:27:31.979034 containerd[1453]: time="2025-01-13T21:27:31.979032919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:27:31.987825 containerd[1453]: time="2025-01-13T21:27:31.987647902Z" level=info msg="CreateContainer within sandbox \"4587c71cd4a8d4e16ac30b9fe69d10497809492e6b04727deca0314a29708866\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:27:32.007021 containerd[1453]: time="2025-01-13T21:27:32.006975225Z" level=info msg="CreateContainer within sandbox \"4587c71cd4a8d4e16ac30b9fe69d10497809492e6b04727deca0314a29708866\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b4d553c5c45dcee32aba91204381895fecfe02806158389f3edc14ed4bf08c02\"" Jan 13 21:27:32.007687 containerd[1453]: time="2025-01-13T21:27:32.007610828Z" level=info msg="StartContainer for \"b4d553c5c45dcee32aba91204381895fecfe02806158389f3edc14ed4bf08c02\"" Jan 13 21:27:32.082406 systemd[1]: Started cri-containerd-b4d553c5c45dcee32aba91204381895fecfe02806158389f3edc14ed4bf08c02.scope - libcontainer container b4d553c5c45dcee32aba91204381895fecfe02806158389f3edc14ed4bf08c02. Jan 13 21:27:32.208680 containerd[1453]: time="2025-01-13T21:27:32.208511707Z" level=info msg="StartContainer for \"b4d553c5c45dcee32aba91204381895fecfe02806158389f3edc14ed4bf08c02\" returns successfully" Jan 13 21:27:32.238170 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:27:32.238325 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
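[Note: the reported 7.220268141s agrees with the gap between the PullImage request logged at 21:27:24.758660362Z and the completion entry above, to within the few dozen microseconds between containerd's internal measurement and the log writes. A quick Go check using the timestamps copied from this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd entries in this log.
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:27:24.758660362Z") // PullImage request
	done, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:27:31.978984428Z")  // "Pulled image ... in 7.220268141s"
	fmt.Println(done.Sub(start)) // 7.220324066s, about 56µs of logging skew vs. the reported figure
}
]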
Jan 13 21:27:32.784448 kubelet[2578]: E0113 21:27:32.784419 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:33.786061 kubelet[2578]: E0113 21:27:33.786015 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:33.805487 systemd[1]: run-containerd-runc-k8s.io-b4d553c5c45dcee32aba91204381895fecfe02806158389f3edc14ed4bf08c02-runc.3kZYiU.mount: Deactivated successfully. Jan 13 21:27:34.202046 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:34278.service - OpenSSH per-connection server daemon (10.0.0.1:34278). Jan 13 21:27:34.244370 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 34278 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:34.246131 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:34.250052 systemd-logind[1437]: New session 11 of user core. Jan 13 21:27:34.261379 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:27:34.374629 sshd[4013]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:34.384420 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:34278.service: Deactivated successfully. Jan 13 21:27:34.386447 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:27:34.388033 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:27:34.393549 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:34284.service - OpenSSH per-connection server daemon (10.0.0.1:34284). Jan 13 21:27:34.394380 systemd-logind[1437]: Removed session 11. Jan 13 21:27:34.423015 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 34284 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:34.424572 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:34.428430 systemd-logind[1437]: New session 12 of user core. Jan 13 21:27:34.441424 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:27:34.721371 sshd[4028]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:34.728175 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:34284.service: Deactivated successfully. Jan 13 21:27:34.730096 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:27:34.730717 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:27:34.736556 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:34290.service - OpenSSH per-connection server daemon (10.0.0.1:34290). Jan 13 21:27:34.737470 systemd-logind[1437]: Removed session 12. Jan 13 21:27:34.765752 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 34290 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:34.767156 sshd[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:34.770884 systemd-logind[1437]: New session 13 of user core. Jan 13 21:27:34.777391 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:27:34.890143 sshd[4066]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:34.893861 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:34290.service: Deactivated successfully. Jan 13 21:27:34.895872 systemd[1]: session-13.scope: Deactivated successfully. 
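[Note: the recurring dns.go warning above is the kubelet enforcing the resolver's nameserver cap. The node's resolv.conf carries more entries than the limit of three (glibc's MAXNS), so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the rest are dropped with this warning. A toy version of the truncation; the fourth entry below is hypothetical, since the omitted servers are not shown in the log:

package main

import "fmt"

// Keep at most `limit` nameservers, dropping everything past the limit,
// which is the behavior the kubelet warns about above.
func capNameservers(ns []string, limit int) []string {
	if len(ns) <= limit {
		return ns
	}
	return ns[:limit]
}

func main() {
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"} // 192.0.2.53: hypothetical extra entry
	fmt.Println(capNameservers(ns, 3))                            // [1.1.1.1 1.0.0.1 8.8.8.8]
}
]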
Jan 13 21:27:34.896488 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:27:34.897365 systemd-logind[1437]: Removed session 13. Jan 13 21:27:36.667999 containerd[1453]: time="2025-01-13T21:27:36.667923833Z" level=info msg="StopPodSandbox for \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\"" Jan 13 21:27:36.788329 kubelet[2578]: I0113 21:27:36.787222 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-8dht7" podStartSLOduration=6.387450536 podStartE2EDuration="26.78718017s" podCreationTimestamp="2025-01-13 21:27:10 +0000 UTC" firstStartedPulling="2025-01-13 21:27:11.579595164 +0000 UTC m=+23.022997203" lastFinishedPulling="2025-01-13 21:27:31.979324807 +0000 UTC m=+43.422726837" observedRunningTime="2025-01-13 21:27:32.838056862 +0000 UTC m=+44.281458911" watchObservedRunningTime="2025-01-13 21:27:36.78718017 +0000 UTC m=+48.230582209" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.787 [INFO][4122] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.790 [INFO][4122] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" iface="eth0" netns="/var/run/netns/cni-83f96526-24a2-979d-502c-ba9dfd869cc7" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.791 [INFO][4122] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" iface="eth0" netns="/var/run/netns/cni-83f96526-24a2-979d-502c-ba9dfd869cc7" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.793 [INFO][4122] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" iface="eth0" netns="/var/run/netns/cni-83f96526-24a2-979d-502c-ba9dfd869cc7" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.793 [INFO][4122] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.793 [INFO][4122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.862 [INFO][4142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" HandleID="k8s-pod-network.645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.862 [INFO][4142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.862 [INFO][4142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.868 [WARNING][4142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" HandleID="k8s-pod-network.645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.868 [INFO][4142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" HandleID="k8s-pod-network.645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.870 [INFO][4142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:36.876378 containerd[1453]: 2025-01-13 21:27:36.873 [INFO][4122] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:36.876773 containerd[1453]: time="2025-01-13T21:27:36.876609487Z" level=info msg="TearDown network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\" successfully" Jan 13 21:27:36.876773 containerd[1453]: time="2025-01-13T21:27:36.876654212Z" level=info msg="StopPodSandbox for \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\" returns successfully" Jan 13 21:27:36.877750 containerd[1453]: time="2025-01-13T21:27:36.877697871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795448cffc-hz6r5,Uid:1575694f-4276-4e47-b4a5-e229a7267251,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:27:36.879148 systemd[1]: run-netns-cni\x2d83f96526\x2d24a2\x2d979d\x2d502c\x2dba9dfd869cc7.mount: Deactivated successfully. Jan 13 21:27:37.576248 systemd-networkd[1394]: cali7ee35b0acc0: Link UP Jan 13 21:27:37.576682 systemd-networkd[1394]: cali7ee35b0acc0: Gained carrier Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.478 [INFO][4159] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.488 [INFO][4159] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0 calico-apiserver-795448cffc- calico-apiserver 1575694f-4276-4e47-b4a5-e229a7267251 909 0 2025-01-13 21:27:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:795448cffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-795448cffc-hz6r5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7ee35b0acc0 [] []}} ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-hz6r5" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--hz6r5-" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.488 [INFO][4159] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-hz6r5" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.519 [INFO][4174] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" HandleID="k8s-pod-network.f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.526 [INFO][4174] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" HandleID="k8s-pod-network.f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050d10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-795448cffc-hz6r5", "timestamp":"2025-01-13 21:27:37.519111762 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.526 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.526 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.526 [INFO][4174] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.528 [INFO][4174] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" host="localhost" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.531 [INFO][4174] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.534 [INFO][4174] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.535 [INFO][4174] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.537 [INFO][4174] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.537 [INFO][4174] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" host="localhost" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.538 [INFO][4174] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436 Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.546 [INFO][4174] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" host="localhost" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.566 [INFO][4174] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" host="localhost" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.566 [INFO][4174] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.129/26] handle="k8s-pod-network.f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" host="localhost" Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.566 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:37.733770 containerd[1453]: 2025-01-13 21:27:37.566 [INFO][4174] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" HandleID="k8s-pod-network.f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:37.734850 containerd[1453]: 2025-01-13 21:27:37.569 [INFO][4159] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-hz6r5" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0", GenerateName:"calico-apiserver-795448cffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1575694f-4276-4e47-b4a5-e229a7267251", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795448cffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-795448cffc-hz6r5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ee35b0acc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:37.734850 containerd[1453]: 2025-01-13 21:27:37.569 [INFO][4159] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-hz6r5" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:37.734850 containerd[1453]: 2025-01-13 21:27:37.569 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ee35b0acc0 ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-hz6r5" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:37.734850 containerd[1453]: 2025-01-13 21:27:37.577 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-hz6r5" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:37.734850 containerd[1453]: 2025-01-13 21:27:37.577 [INFO][4159] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-hz6r5" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0", GenerateName:"calico-apiserver-795448cffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1575694f-4276-4e47-b4a5-e229a7267251", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795448cffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436", Pod:"calico-apiserver-795448cffc-hz6r5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ee35b0acc0", MAC:"ae:8c:88:05:10:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:37.734850 containerd[1453]: 2025-01-13 21:27:37.730 [INFO][4159] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-hz6r5" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:37.768109 containerd[1453]: time="2025-01-13T21:27:37.767998974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:37.768109 containerd[1453]: time="2025-01-13T21:27:37.768082360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:37.768403 containerd[1453]: time="2025-01-13T21:27:37.768096857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:37.768479 containerd[1453]: time="2025-01-13T21:27:37.768379478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:37.799605 systemd[1]: Started cri-containerd-f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436.scope - libcontainer container f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436. 
Jan 13 21:27:37.811283 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:27:37.837340 containerd[1453]: time="2025-01-13T21:27:37.836852881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795448cffc-hz6r5,Uid:1575694f-4276-4e47-b4a5-e229a7267251,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436\"" Jan 13 21:27:37.839392 containerd[1453]: time="2025-01-13T21:27:37.839352221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:27:37.882765 systemd[1]: run-containerd-runc-k8s.io-f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436-runc.ZCUuc8.mount: Deactivated successfully. Jan 13 21:27:38.668842 containerd[1453]: time="2025-01-13T21:27:38.667934481Z" level=info msg="StopPodSandbox for \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\"" Jan 13 21:27:38.668842 containerd[1453]: time="2025-01-13T21:27:38.668161047Z" level=info msg="StopPodSandbox for \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\"" Jan 13 21:27:38.668842 containerd[1453]: time="2025-01-13T21:27:38.668562169Z" level=info msg="StopPodSandbox for \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\"" Jan 13 21:27:38.668842 containerd[1453]: time="2025-01-13T21:27:38.668674860Z" level=info msg="StopPodSandbox for \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\"" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.731 [INFO][4319] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.731 [INFO][4319] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" iface="eth0" netns="/var/run/netns/cni-17d65239-3b6f-481e-d913-caba4ab78019" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.732 [INFO][4319] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" iface="eth0" netns="/var/run/netns/cni-17d65239-3b6f-481e-d913-caba4ab78019" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.737 [INFO][4319] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" iface="eth0" netns="/var/run/netns/cni-17d65239-3b6f-481e-d913-caba4ab78019" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.737 [INFO][4319] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.737 [INFO][4319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.762 [INFO][4357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" HandleID="k8s-pod-network.6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.762 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.762 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.769 [WARNING][4357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" HandleID="k8s-pod-network.6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.769 [INFO][4357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" HandleID="k8s-pod-network.6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.771 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:38.775245 containerd[1453]: 2025-01-13 21:27:38.773 [INFO][4319] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:38.776017 containerd[1453]: time="2025-01-13T21:27:38.775530936Z" level=info msg="TearDown network for sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\" successfully" Jan 13 21:27:38.776017 containerd[1453]: time="2025-01-13T21:27:38.775564008Z" level=info msg="StopPodSandbox for \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\" returns successfully" Jan 13 21:27:38.776575 containerd[1453]: time="2025-01-13T21:27:38.776550430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c4drk,Uid:4b709ff7-1b29-4a55-8a27-61c5d7be7f36,Namespace:calico-system,Attempt:1,}" Jan 13 21:27:38.779649 systemd[1]: run-netns-cni\x2d17d65239\x2d3b6f\x2d481e\x2dd913\x2dcaba4ab78019.mount: Deactivated successfully. Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.734 [INFO][4334] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.735 [INFO][4334] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" iface="eth0" netns="/var/run/netns/cni-12710931-8d3c-2c33-74dd-14ab2676cc77" Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.735 [INFO][4334] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" iface="eth0" netns="/var/run/netns/cni-12710931-8d3c-2c33-74dd-14ab2676cc77" Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.736 [INFO][4334] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" iface="eth0" netns="/var/run/netns/cni-12710931-8d3c-2c33-74dd-14ab2676cc77" Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.736 [INFO][4334] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.736 [INFO][4334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.764 [INFO][4356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" HandleID="k8s-pod-network.65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.765 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.771 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.778 [WARNING][4356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" HandleID="k8s-pod-network.65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.778 [INFO][4356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" HandleID="k8s-pod-network.65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.781 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:38.786105 containerd[1453]: 2025-01-13 21:27:38.783 [INFO][4334] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:38.786736 containerd[1453]: time="2025-01-13T21:27:38.786689386Z" level=info msg="TearDown network for sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\" successfully" Jan 13 21:27:38.786736 containerd[1453]: time="2025-01-13T21:27:38.786721857Z" level=info msg="StopPodSandbox for \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\" returns successfully" Jan 13 21:27:38.787198 kubelet[2578]: E0113 21:27:38.787165 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:38.788279 containerd[1453]: time="2025-01-13T21:27:38.787927801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lgbw8,Uid:f7353521-0488-482c-a756-367e20c4c1b4,Namespace:kube-system,Attempt:1,}" Jan 13 21:27:38.788866 systemd[1]: run-netns-cni\x2d12710931\x2d8d3c\x2d2c33\x2d74dd\x2d14ab2676cc77.mount: Deactivated successfully. Jan 13 21:27:38.788989 systemd-networkd[1394]: cali7ee35b0acc0: Gained IPv6LL Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.736 [INFO][4335] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.737 [INFO][4335] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" iface="eth0" netns="/var/run/netns/cni-69dd3e79-68e3-017e-3d44-6b24b1e6ef63" Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.737 [INFO][4335] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" iface="eth0" netns="/var/run/netns/cni-69dd3e79-68e3-017e-3d44-6b24b1e6ef63" Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.737 [INFO][4335] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" iface="eth0" netns="/var/run/netns/cni-69dd3e79-68e3-017e-3d44-6b24b1e6ef63" Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.738 [INFO][4335] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.738 [INFO][4335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.774 [INFO][4358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" HandleID="k8s-pod-network.a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.774 [INFO][4358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.781 [INFO][4358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.786 [WARNING][4358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" HandleID="k8s-pod-network.a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.786 [INFO][4358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" HandleID="k8s-pod-network.a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.788 [INFO][4358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:38.802659 containerd[1453]: 2025-01-13 21:27:38.797 [INFO][4335] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Jan 13 21:27:38.802659 containerd[1453]: time="2025-01-13T21:27:38.801208142Z" level=info msg="TearDown network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\" successfully" Jan 13 21:27:38.802659 containerd[1453]: time="2025-01-13T21:27:38.801233840Z" level=info msg="StopPodSandbox for \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\" returns successfully" Jan 13 21:27:38.803148 kubelet[2578]: E0113 21:27:38.802743 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:38.806965 containerd[1453]: time="2025-01-13T21:27:38.806745323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4dhwt,Uid:fe4f7602-38f2-4964-b7eb-58611454a234,Namespace:kube-system,Attempt:1,}" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.735 [INFO][4320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.735 [INFO][4320] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" iface="eth0" netns="/var/run/netns/cni-3da23263-4319-9aec-396f-ead9b9f423fc" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.735 [INFO][4320] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" iface="eth0" netns="/var/run/netns/cni-3da23263-4319-9aec-396f-ead9b9f423fc" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.737 [INFO][4320] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" iface="eth0" netns="/var/run/netns/cni-3da23263-4319-9aec-396f-ead9b9f423fc" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.737 [INFO][4320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.740 [INFO][4320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.777 [INFO][4366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" HandleID="k8s-pod-network.b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.777 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.788 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.804 [WARNING][4366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" HandleID="k8s-pod-network.b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.804 [INFO][4366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" HandleID="k8s-pod-network.b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.808 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:38.819404 containerd[1453]: 2025-01-13 21:27:38.811 [INFO][4320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:38.819954 containerd[1453]: time="2025-01-13T21:27:38.819551716Z" level=info msg="TearDown network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\" successfully" Jan 13 21:27:38.819954 containerd[1453]: time="2025-01-13T21:27:38.819576112Z" level=info msg="StopPodSandbox for \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\" returns successfully" Jan 13 21:27:38.820875 containerd[1453]: time="2025-01-13T21:27:38.820802474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b67c4f7b5-5nsgj,Uid:4c856975-e2ef-4c9d-acd0-da77d92975f0,Namespace:calico-system,Attempt:1,}" Jan 13 21:27:38.887931 systemd[1]: run-netns-cni\x2d3da23263\x2d4319\x2d9aec\x2d396f\x2dead9b9f423fc.mount: Deactivated successfully. Jan 13 21:27:38.888341 systemd[1]: run-netns-cni\x2d69dd3e79\x2d68e3\x2d017e\x2d3d44\x2d6b24b1e6ef63.mount: Deactivated successfully. 
Jan 13 21:27:38.919961 systemd-networkd[1394]: cali8888016d363: Link UP Jan 13 21:27:38.920510 systemd-networkd[1394]: cali8888016d363: Gained carrier Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.820 [INFO][4385] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.830 [INFO][4385] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c4drk-eth0 csi-node-driver- calico-system 4b709ff7-1b29-4a55-8a27-61c5d7be7f36 936 0 2025-01-13 21:27:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c4drk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8888016d363 [] []}} ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Namespace="calico-system" Pod="csi-node-driver-c4drk" WorkloadEndpoint="localhost-k8s-csi--node--driver--c4drk-" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.830 [INFO][4385] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Namespace="calico-system" Pod="csi-node-driver-c4drk" WorkloadEndpoint="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.863 [INFO][4409] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" HandleID="k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.875 [INFO][4409] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" HandleID="k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375ef0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c4drk", "timestamp":"2025-01-13 21:27:38.863373011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.876 [INFO][4409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.876 [INFO][4409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.876 [INFO][4409] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.878 [INFO][4409] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" host="localhost" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.884 [INFO][4409] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.890 [INFO][4409] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.893 [INFO][4409] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.895 [INFO][4409] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.895 [INFO][4409] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" host="localhost" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.897 [INFO][4409] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4 Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.901 [INFO][4409] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" host="localhost" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.912 [INFO][4409] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" host="localhost" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.912 [INFO][4409] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" host="localhost" Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.912 [INFO][4409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
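The ipam.go run just completed shows Calico's block-affinity allocation end to end: look up the host's affinities, try the affine block 192.168.88.128/26, load and confirm it, pick a free ordinal, create a handle, then durably "write block in order to claim IPs" before releasing the host-wide lock — yielding 192.168.88.130 here. A simplified, self-contained Go sketch of assigning the next free ordinal from such a /26; the map-as-bitmap and the pre-seeded entries are illustrative only, not Calico's datastore model:

    package main

    import (
        "fmt"
        "net"
    )

    // block models a sliver of a Calico allocation block: a CIDR plus a
    // record of which ordinals are taken and by which handle.
    type block struct {
        cidr net.IPNet
        used map[int]string // ordinal -> handle ID
    }

    // assign hands out the lowest free ordinal, as in ipam.go 1180 above.
    func (b *block) assign(handle string) (net.IP, error) {
        ones, bits := b.cidr.Mask.Size()
        size := 1 << (bits - ones) // 64 addresses in a /26
        for ord := 0; ord < size; ord++ {
            if _, taken := b.used[ord]; taken {
                continue
            }
            b.used[ord] = handle
            ip := make(net.IP, 4)
            copy(ip, b.cidr.IP.To4())
            ip[3] += byte(ord) // a /26 never crosses the last octet
            return ip, nil
        }
        return nil, fmt.Errorf("block %s is full", b.cidr.String())
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.88.128/26")
        b := &block{cidr: *cidr, used: map[int]string{
            0: "reserved",         // illustrative: .128 already accounted for
            1: "k8s-pod-network…", // illustrative: .129, presumably an earlier sandbox
        }}
        ip, _ := b.assign("k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4")
        fmt.Println(ip) // 192.168.88.130, matching the claim in the log
    }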
Jan 13 21:27:38.940493 containerd[1453]: 2025-01-13 21:27:38.912 [INFO][4409] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" HandleID="k8s-pod-network.f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:38.941095 containerd[1453]: 2025-01-13 21:27:38.916 [INFO][4385] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Namespace="calico-system" Pod="csi-node-driver-c4drk" WorkloadEndpoint="localhost-k8s-csi--node--driver--c4drk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c4drk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b709ff7-1b29-4a55-8a27-61c5d7be7f36", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c4drk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8888016d363", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:38.941095 containerd[1453]: 2025-01-13 21:27:38.916 [INFO][4385] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Namespace="calico-system" Pod="csi-node-driver-c4drk" WorkloadEndpoint="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:38.941095 containerd[1453]: 2025-01-13 21:27:38.916 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8888016d363 ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Namespace="calico-system" Pod="csi-node-driver-c4drk" WorkloadEndpoint="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:38.941095 containerd[1453]: 2025-01-13 21:27:38.918 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Namespace="calico-system" Pod="csi-node-driver-c4drk" WorkloadEndpoint="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:38.941095 containerd[1453]: 2025-01-13 21:27:38.919 [INFO][4385] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Namespace="calico-system" Pod="csi-node-driver-c4drk" WorkloadEndpoint="localhost-k8s-csi--node--driver--c4drk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c4drk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b709ff7-1b29-4a55-8a27-61c5d7be7f36", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4", Pod:"csi-node-driver-c4drk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8888016d363", MAC:"0e:0a:73:3a:b5:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:38.941095 containerd[1453]: 2025-01-13 21:27:38.929 [INFO][4385] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4" Namespace="calico-system" Pod="csi-node-driver-c4drk" WorkloadEndpoint="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:39.007862 containerd[1453]: time="2025-01-13T21:27:39.007644611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:39.007862 containerd[1453]: time="2025-01-13T21:27:39.007707950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:39.007862 containerd[1453]: time="2025-01-13T21:27:39.007725072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.008813 containerd[1453]: time="2025-01-13T21:27:39.008750275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.021281 systemd-networkd[1394]: cali49e4cede2af: Link UP Jan 13 21:27:39.021662 systemd-networkd[1394]: cali49e4cede2af: Gained carrier Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.851 [INFO][4398] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.862 [INFO][4398] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--lgbw8-eth0 coredns-76f75df574- kube-system f7353521-0488-482c-a756-367e20c4c1b4 937 0 2025-01-13 21:27:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-lgbw8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali49e4cede2af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Namespace="kube-system" Pod="coredns-76f75df574-lgbw8" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--lgbw8-" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.862 [INFO][4398] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Namespace="kube-system" Pod="coredns-76f75df574-lgbw8" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.914 [INFO][4444] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" HandleID="k8s-pod-network.10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.944 [INFO][4444] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" HandleID="k8s-pod-network.10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-lgbw8", "timestamp":"2025-01-13 21:27:38.914279272 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.944 [INFO][4444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.944 [INFO][4444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.944 [INFO][4444] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.949 [INFO][4444] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" host="localhost" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.958 [INFO][4444] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.968 [INFO][4444] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.971 [INFO][4444] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.976 [INFO][4444] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.976 [INFO][4444] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" host="localhost" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.978 [INFO][4444] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293 Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:38.985 [INFO][4444] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" host="localhost" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:39.004 [INFO][4444] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" host="localhost" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:39.005 [INFO][4444] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" host="localhost" Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:39.007 [INFO][4444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
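The &v3.WorkloadEndpoint{…} literals logged above are the objects the CNI plugin writes back to the Calico datastore ("Wrote updated endpoint to datastore"). Rendered as projectcalico.org/v3 YAML, the csi-node-driver-c4drk endpoint would look roughly like the following — field names follow the v3 API and the values are copied from the dump, but treat the exact rendering (in particular metadata.name) as an approximation:

    apiVersion: projectcalico.org/v3
    kind: WorkloadEndpoint
    metadata:
      name: localhost-k8s-csi--node--driver--c4drk-eth0
      namespace: calico-system
    spec:
      orchestrator: k8s
      node: localhost
      pod: csi-node-driver-c4drk
      endpoint: eth0
      serviceAccountName: csi-node-driver
      containerID: f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4
      interfaceName: cali8888016d363
      mac: 0e:0a:73:3a:b5:e8
      ipNetworks:
      - 192.168.88.130/32
      profiles:
      - kns.calico-system
      - ksa.calico-system.csi-node-driver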
Jan 13 21:27:39.042556 containerd[1453]: 2025-01-13 21:27:39.008 [INFO][4444] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" HandleID="k8s-pod-network.10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:39.043192 containerd[1453]: 2025-01-13 21:27:39.011 [INFO][4398] cni-plugin/k8s.go 386: Populated endpoint ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Namespace="kube-system" Pod="coredns-76f75df574-lgbw8" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--lgbw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--lgbw8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f7353521-0488-482c-a756-367e20c4c1b4", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-lgbw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49e4cede2af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:39.043192 containerd[1453]: 2025-01-13 21:27:39.012 [INFO][4398] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Namespace="kube-system" Pod="coredns-76f75df574-lgbw8" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:39.043192 containerd[1453]: 2025-01-13 21:27:39.012 [INFO][4398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49e4cede2af ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Namespace="kube-system" Pod="coredns-76f75df574-lgbw8" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:39.043192 containerd[1453]: 2025-01-13 21:27:39.022 [INFO][4398] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Namespace="kube-system" Pod="coredns-76f75df574-lgbw8" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:39.043192 containerd[1453]: 2025-01-13 21:27:39.025 
[INFO][4398] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Namespace="kube-system" Pod="coredns-76f75df574-lgbw8" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--lgbw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--lgbw8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f7353521-0488-482c-a756-367e20c4c1b4", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293", Pod:"coredns-76f75df574-lgbw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49e4cede2af", MAC:"3a:1e:83:0a:48:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:39.043192 containerd[1453]: 2025-01-13 21:27:39.037 [INFO][4398] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293" Namespace="kube-system" Pod="coredns-76f75df574-lgbw8" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:39.056486 systemd[1]: Started cri-containerd-f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4.scope - libcontainer container f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4. 
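The coredns endpoints' Ports are logged as Go hex literals: Port:0x35 is 3*16 + 5 = 53 (the dns and dns-tcp entries) and Port:0x23c1 is 2*4096 + 3*256 + 12*16 + 1 = 9153, CoreDNS's Prometheus metrics port — consistent with the plain-text {dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153} list printed earlier in the same record. A one-line check:

    package main

    import "fmt"

    func main() { fmt.Println(0x35, 0x23c1) } // prints: 53 9153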
Jan 13 21:27:39.063496 systemd-networkd[1394]: cali73db2be64a6: Link UP Jan 13 21:27:39.064211 systemd-networkd[1394]: cali73db2be64a6: Gained carrier Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:38.866 [INFO][4416] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:38.882 [INFO][4416] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--4dhwt-eth0 coredns-76f75df574- kube-system fe4f7602-38f2-4964-b7eb-58611454a234 939 0 2025-01-13 21:27:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-4dhwt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73db2be64a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Namespace="kube-system" Pod="coredns-76f75df574-4dhwt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4dhwt-" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:38.882 [INFO][4416] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Namespace="kube-system" Pod="coredns-76f75df574-4dhwt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:38.944 [INFO][4452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" HandleID="k8s-pod-network.38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:38.965 [INFO][4452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" HandleID="k8s-pod-network.38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003601c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-4dhwt", "timestamp":"2025-01-13 21:27:38.944022889 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:38.965 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.009 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.009 [INFO][4452] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.012 [INFO][4452] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" host="localhost" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.022 [INFO][4452] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.028 [INFO][4452] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.031 [INFO][4452] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.034 [INFO][4452] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.034 [INFO][4452] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" host="localhost" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.038 [INFO][4452] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5 Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.046 [INFO][4452] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" host="localhost" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.054 [INFO][4452] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" host="localhost" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.055 [INFO][4452] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" host="localhost" Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.055 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
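Each burst of "loading plugin \"io.containerd.ttrpc.v1.task\"…" lines above marks a fresh containerd-shim-runc-v2 process starting for one sandbox, and its systemd counterpart is the transient cri-containerd-<id>.scope unit ("Started cri-containerd-….scope - libcontainer container …"). Assuming crictl is installed on this node and the sandboxes are still live, hypothetical invocations to cross-reference them:

    # map a pod to its sandbox ID
    crictl pods --name coredns-76f75df574-4dhwt
    # inspect the transient scope wrapping the csi-node-driver sandbox
    systemctl status cri-containerd-f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4.scope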
Jan 13 21:27:39.077866 containerd[1453]: 2025-01-13 21:27:39.055 [INFO][4452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" HandleID="k8s-pod-network.38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:39.079551 containerd[1453]: 2025-01-13 21:27:39.059 [INFO][4416] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Namespace="kube-system" Pod="coredns-76f75df574-4dhwt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4dhwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4dhwt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe4f7602-38f2-4964-b7eb-58611454a234", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-4dhwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73db2be64a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:39.079551 containerd[1453]: 2025-01-13 21:27:39.059 [INFO][4416] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Namespace="kube-system" Pod="coredns-76f75df574-4dhwt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:39.079551 containerd[1453]: 2025-01-13 21:27:39.059 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73db2be64a6 ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Namespace="kube-system" Pod="coredns-76f75df574-4dhwt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:39.079551 containerd[1453]: 2025-01-13 21:27:39.062 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Namespace="kube-system" Pod="coredns-76f75df574-4dhwt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:39.079551 containerd[1453]: 2025-01-13 21:27:39.063 
[INFO][4416] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Namespace="kube-system" Pod="coredns-76f75df574-4dhwt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4dhwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4dhwt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe4f7602-38f2-4964-b7eb-58611454a234", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5", Pod:"coredns-76f75df574-4dhwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73db2be64a6", MAC:"62:75:21:d4:97:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:39.079551 containerd[1453]: 2025-01-13 21:27:39.073 [INFO][4416] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5" Namespace="kube-system" Pod="coredns-76f75df574-4dhwt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--4dhwt-eth0" Jan 13 21:27:39.082791 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:27:39.083780 containerd[1453]: time="2025-01-13T21:27:39.083568162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:39.083780 containerd[1453]: time="2025-01-13T21:27:39.083646119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:39.083780 containerd[1453]: time="2025-01-13T21:27:39.083660736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.083931 containerd[1453]: time="2025-01-13T21:27:39.083752830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.097678 systemd-networkd[1394]: cali4c9b0e62e29: Link UP Jan 13 21:27:39.098676 systemd-networkd[1394]: cali4c9b0e62e29: Gained carrier Jan 13 21:27:39.102752 containerd[1453]: time="2025-01-13T21:27:39.102680358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c4drk,Uid:4b709ff7-1b29-4a55-8a27-61c5d7be7f36,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4\"" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:38.889 [INFO][4431] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:38.906 [INFO][4431] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0 calico-kube-controllers-b67c4f7b5- calico-system 4c856975-e2ef-4c9d-acd0-da77d92975f0 938 0 2025-01-13 21:27:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b67c4f7b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b67c4f7b5-5nsgj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4c9b0e62e29 [] []}} ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Namespace="calico-system" Pod="calico-kube-controllers-b67c4f7b5-5nsgj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:38.906 [INFO][4431] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Namespace="calico-system" Pod="calico-kube-controllers-b67c4f7b5-5nsgj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:38.980 [INFO][4475] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" HandleID="k8s-pod-network.5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.002 [INFO][4475] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" HandleID="k8s-pod-network.5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042ab60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b67c4f7b5-5nsgj", "timestamp":"2025-01-13 21:27:38.980132592 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.002 [INFO][4475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.055 [INFO][4475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.055 [INFO][4475] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.057 [INFO][4475] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" host="localhost" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.065 [INFO][4475] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.073 [INFO][4475] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.076 [INFO][4475] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.078 [INFO][4475] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.078 [INFO][4475] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" host="localhost" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.079 [INFO][4475] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.084 [INFO][4475] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" host="localhost" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.091 [INFO][4475] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" host="localhost" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.091 [INFO][4475] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" host="localhost" Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.091 [INFO][4475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
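With this fourth assignment the 192.168.88.128/26 block now accounts for .130 (csi-node-driver-c4drk), .131 (coredns-76f75df574-lgbw8), .132 (coredns-76f75df574-4dhwt) and .133 (calico-kube-controllers-b67c4f7b5-5nsgj), presumably alongside an earlier address for the calico-apiserver sandbox brought up at the top of this stretch. Assuming calicoctl is available and pointed at the same datastore, block usage and a single assignment could be checked with:

    calicoctl ipam show                        # per-block utilisation
    calicoctl ipam show --ip=192.168.88.133    # attributes of one assignment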
Jan 13 21:27:39.114840 containerd[1453]: 2025-01-13 21:27:39.091 [INFO][4475] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" HandleID="k8s-pod-network.5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0"
Jan 13 21:27:39.115539 containerd[1453]: 2025-01-13 21:27:39.094 [INFO][4431] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Namespace="calico-system" Pod="calico-kube-controllers-b67c4f7b5-5nsgj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0", GenerateName:"calico-kube-controllers-b67c4f7b5-", Namespace:"calico-system", SelfLink:"", UID:"4c856975-e2ef-4c9d-acd0-da77d92975f0", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b67c4f7b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b67c4f7b5-5nsgj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c9b0e62e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:27:39.115539 containerd[1453]: 2025-01-13 21:27:39.094 [INFO][4431] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Namespace="calico-system" Pod="calico-kube-controllers-b67c4f7b5-5nsgj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0"
Jan 13 21:27:39.115539 containerd[1453]: 2025-01-13 21:27:39.095 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c9b0e62e29 ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Namespace="calico-system" Pod="calico-kube-controllers-b67c4f7b5-5nsgj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0"
Jan 13 21:27:39.115539 containerd[1453]: 2025-01-13 21:27:39.099 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Namespace="calico-system" Pod="calico-kube-controllers-b67c4f7b5-5nsgj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0"
Jan 13 21:27:39.115539 containerd[1453]: 2025-01-13 21:27:39.099 [INFO][4431] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Namespace="calico-system" Pod="calico-kube-controllers-b67c4f7b5-5nsgj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0", GenerateName:"calico-kube-controllers-b67c4f7b5-", Namespace:"calico-system", SelfLink:"", UID:"4c856975-e2ef-4c9d-acd0-da77d92975f0", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b67c4f7b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f", Pod:"calico-kube-controllers-b67c4f7b5-5nsgj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c9b0e62e29", MAC:"ae:9c:9a:83:df:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:27:39.115539 containerd[1453]: 2025-01-13 21:27:39.109 [INFO][4431] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f" Namespace="calico-system" Pod="calico-kube-controllers-b67c4f7b5-5nsgj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0"
Jan 13 21:27:39.117480 systemd[1]: Started cri-containerd-10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293.scope - libcontainer container 10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293.
Jan 13 21:27:39.130073 containerd[1453]: time="2025-01-13T21:27:39.129738091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:39.130073 containerd[1453]: time="2025-01-13T21:27:39.129847016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:39.130073 containerd[1453]: time="2025-01-13T21:27:39.129860351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:39.130073 containerd[1453]: time="2025-01-13T21:27:39.129961991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:39.136704 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:27:39.138722 containerd[1453]: time="2025-01-13T21:27:39.138597076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:39.138722 containerd[1453]: time="2025-01-13T21:27:39.138675453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:39.138722 containerd[1453]: time="2025-01-13T21:27:39.138695741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:39.138948 containerd[1453]: time="2025-01-13T21:27:39.138804595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:39.155491 systemd[1]: Started cri-containerd-38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5.scope - libcontainer container 38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5.
Jan 13 21:27:39.159043 systemd[1]: Started cri-containerd-5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f.scope - libcontainer container 5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f.
Jan 13 21:27:39.170561 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:27:39.191431 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:27:39.220907 containerd[1453]: time="2025-01-13T21:27:39.220861497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b67c4f7b5-5nsgj,Uid:4c856975-e2ef-4c9d-acd0-da77d92975f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f\""
Jan 13 21:27:39.222773 containerd[1453]: time="2025-01-13T21:27:39.222741795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4dhwt,Uid:fe4f7602-38f2-4964-b7eb-58611454a234,Namespace:kube-system,Attempt:1,} returns sandbox id \"38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5\""
Jan 13 21:27:39.224360 kubelet[2578]: E0113 21:27:39.224221 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:39.226470 containerd[1453]: time="2025-01-13T21:27:39.226406081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lgbw8,Uid:f7353521-0488-482c-a756-367e20c4c1b4,Namespace:kube-system,Attempt:1,} returns sandbox id \"10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293\""
Jan 13 21:27:39.227975 containerd[1453]: time="2025-01-13T21:27:39.227554646Z" level=info msg="CreateContainer within sandbox \"38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:27:39.228595 kubelet[2578]: E0113 21:27:39.228580 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:39.246506 containerd[1453]: time="2025-01-13T21:27:39.246449012Z" level=info msg="CreateContainer within sandbox \"10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:27:39.260366 containerd[1453]: time="2025-01-13T21:27:39.260307118Z" level=info msg="CreateContainer within sandbox \"38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecfe83356bafdf446b3524c526f4941b9a56a9b5365e20e8337d6681b72233e3\""
Jan 13 21:27:39.261801 containerd[1453]: time="2025-01-13T21:27:39.261736088Z" level=info msg="StartContainer for \"ecfe83356bafdf446b3524c526f4941b9a56a9b5365e20e8337d6681b72233e3\""
Jan 13 21:27:39.279896 containerd[1453]: time="2025-01-13T21:27:39.279730085Z" level=info msg="CreateContainer within sandbox \"10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ad2428614267c408d9e930b73c0e9398d53e76876261300875a4fa1bdb09832\""
Jan 13 21:27:39.280869 containerd[1453]: time="2025-01-13T21:27:39.280837553Z" level=info msg="StartContainer for \"4ad2428614267c408d9e930b73c0e9398d53e76876261300875a4fa1bdb09832\""
Jan 13 21:27:39.301435 systemd[1]: Started cri-containerd-ecfe83356bafdf446b3524c526f4941b9a56a9b5365e20e8337d6681b72233e3.scope - libcontainer container ecfe83356bafdf446b3524c526f4941b9a56a9b5365e20e8337d6681b72233e3.
Jan 13 21:27:39.314416 systemd[1]: Started cri-containerd-4ad2428614267c408d9e930b73c0e9398d53e76876261300875a4fa1bdb09832.scope - libcontainer container 4ad2428614267c408d9e930b73c0e9398d53e76876261300875a4fa1bdb09832.
Jan 13 21:27:39.668223 containerd[1453]: time="2025-01-13T21:27:39.668168358Z" level=info msg="StopPodSandbox for \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\""
Jan 13 21:27:39.846003 containerd[1453]: time="2025-01-13T21:27:39.845942976Z" level=info msg="StartContainer for \"4ad2428614267c408d9e930b73c0e9398d53e76876261300875a4fa1bdb09832\" returns successfully"
Jan 13 21:27:39.846422 containerd[1453]: time="2025-01-13T21:27:39.846036261Z" level=info msg="StartContainer for \"ecfe83356bafdf446b3524c526f4941b9a56a9b5365e20e8337d6681b72233e3\" returns successfully"
Jan 13 21:27:39.856391 kubelet[2578]: E0113 21:27:39.856229 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:39.860357 kubelet[2578]: E0113 21:27:39.860332 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:39.912521 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:54162.service - OpenSSH per-connection server daemon (10.0.0.1:54162).
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.817 [INFO][4804] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50"
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.817 [INFO][4804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" iface="eth0" netns="/var/run/netns/cni-aef01634-f41a-8093-f565-ec7eddebdaf1"
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.817 [INFO][4804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" iface="eth0" netns="/var/run/netns/cni-aef01634-f41a-8093-f565-ec7eddebdaf1"
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.817 [INFO][4804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" iface="eth0" netns="/var/run/netns/cni-aef01634-f41a-8093-f565-ec7eddebdaf1"
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.817 [INFO][4804] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50"
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.818 [INFO][4804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50"
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.844 [INFO][4815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" HandleID="k8s-pod-network.76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.844 [INFO][4815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.844 [INFO][4815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.895 [WARNING][4815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" HandleID="k8s-pod-network.76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.895 [INFO][4815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" HandleID="k8s-pod-network.76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.915 [INFO][4815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:27:39.924158 containerd[1453]: 2025-01-13 21:27:39.920 [INFO][4804] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50"
Jan 13 21:27:39.924158 containerd[1453]: time="2025-01-13T21:27:39.923957261Z" level=info msg="TearDown network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\" successfully"
Jan 13 21:27:39.924158 containerd[1453]: time="2025-01-13T21:27:39.924002847Z" level=info msg="StopPodSandbox for \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\" returns successfully"
Jan 13 21:27:39.928387 systemd[1]: run-netns-cni\x2daef01634\x2df41a\x2d8093\x2df565\x2dec7eddebdaf1.mount: Deactivated successfully.
Jan 13 21:27:39.929046 containerd[1453]: time="2025-01-13T21:27:39.928887343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795448cffc-7gjp9,Uid:4a1a44d8-4c43-4f74-8c7d-42c935a4693e,Namespace:calico-apiserver,Attempt:1,}"
Jan 13 21:27:39.936419 kubelet[2578]: I0113 21:27:39.935459 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4dhwt" podStartSLOduration=36.935386419 podStartE2EDuration="36.935386419s" podCreationTimestamp="2025-01-13 21:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:27:39.931086069 +0000 UTC m=+51.374488108" watchObservedRunningTime="2025-01-13 21:27:39.935386419 +0000 UTC m=+51.378788458"
Jan 13 21:27:39.980651 sshd[4827]: Accepted publickey for core from 10.0.0.1 port 54162 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:39.983745 sshd[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:39.997016 systemd-logind[1437]: New session 14 of user core.
Jan 13 21:27:40.003544 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:27:40.226294 sshd[4827]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:40.232552 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:54162.service: Deactivated successfully.
Jan 13 21:27:40.235043 systemd-networkd[1394]: calie32090bdfd6: Link UP
Jan 13 21:27:40.235294 systemd-networkd[1394]: calie32090bdfd6: Gained carrier
Jan 13 21:27:40.235530 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:27:40.236719 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:27:40.238325 systemd-logind[1437]: Removed session 14.
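The podStartSLOduration=36.935386419 reported by kubelet above is plain timestamp arithmetic: watchObservedRunningTime minus podCreationTimestamp. A quick check in Go, using the two timestamps copied from that entry (the "m=+…" monotonic-clock suffix kubelet prints is dropped, since time.Parse does not accept it):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time formatting, which kubelet prints.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-01-13 21:27:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-01-13 21:27:39.935386419 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 36.935386419s — the podStartSLOduration above
}
```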
Jan 13 21:27:40.249575 kubelet[2578]: I0113 21:27:40.249529 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-lgbw8" podStartSLOduration=37.249471638 podStartE2EDuration="37.249471638s" podCreationTimestamp="2025-01-13 21:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:27:39.978108767 +0000 UTC m=+51.421510806" watchObservedRunningTime="2025-01-13 21:27:40.249471638 +0000 UTC m=+51.692873677"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.058 [INFO][4830] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.081 [INFO][4830] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0 calico-apiserver-795448cffc- calico-apiserver 4a1a44d8-4c43-4f74-8c7d-42c935a4693e 966 0 2025-01-13 21:27:10 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:795448cffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-795448cffc-7gjp9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie32090bdfd6 [] []}} ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-7gjp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--7gjp9-"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.081 [INFO][4830] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-7gjp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.156 [INFO][4866] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" HandleID="k8s-pod-network.c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.173 [INFO][4866] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" HandleID="k8s-pod-network.c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ac260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-795448cffc-7gjp9", "timestamp":"2025-01-13 21:27:40.156693344 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.173 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.173 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.173 [INFO][4866] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.176 [INFO][4866] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" host="localhost"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.182 [INFO][4866] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.190 [INFO][4866] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.194 [INFO][4866] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.198 [INFO][4866] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.198 [INFO][4866] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" host="localhost"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.199 [INFO][4866] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.215 [INFO][4866] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" host="localhost"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.222 [INFO][4866] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" host="localhost"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.224 [INFO][4866] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" host="localhost"
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.224 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
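This second assignment brackets the block read-modify-write with the same "About to acquire / Acquired / Released host-wide IPAM lock" messages as the first, which is why the two pods end up with distinct addresses (.133 and .134) even if their CNI ADDs overlap in time. A minimal Go sketch of that serialization pattern, with an invented counter standing in for the block scan (the lock guarantees distinct results, not which pod gets which address):

```go
package main

import (
	"fmt"
	"sync"
)

type ipam struct {
	mu   sync.Mutex
	next int // next free ordinal in 192.168.88.128/26
}

func (a *ipam) assign(pod string) string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d", 128+a.next)
	a.next++
	return ip + " -> " + pod
}

func main() {
	a := &ipam{next: 5} // .128 through .132 already taken earlier in this log
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-kube-controllers-b67c4f7b5-5nsgj", "calico-apiserver-795448cffc-7gjp9"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(a.assign(p))
		}(pod)
	}
	wg.Wait()
}
```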
Jan 13 21:27:40.252501 containerd[1453]: 2025-01-13 21:27:40.224 [INFO][4866] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" HandleID="k8s-pod-network.c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:40.253341 containerd[1453]: 2025-01-13 21:27:40.230 [INFO][4830] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-7gjp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0", GenerateName:"calico-apiserver-795448cffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a1a44d8-4c43-4f74-8c7d-42c935a4693e", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795448cffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-795448cffc-7gjp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie32090bdfd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:27:40.253341 containerd[1453]: 2025-01-13 21:27:40.230 [INFO][4830] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-7gjp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:40.253341 containerd[1453]: 2025-01-13 21:27:40.230 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie32090bdfd6 ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-7gjp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:40.253341 containerd[1453]: 2025-01-13 21:27:40.236 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-7gjp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:40.253341 containerd[1453]: 2025-01-13 21:27:40.237 [INFO][4830] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-7gjp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0", GenerateName:"calico-apiserver-795448cffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a1a44d8-4c43-4f74-8c7d-42c935a4693e", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795448cffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc", Pod:"calico-apiserver-795448cffc-7gjp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie32090bdfd6", MAC:"4a:15:3d:68:aa:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:27:40.253341 containerd[1453]: 2025-01-13 21:27:40.247 [INFO][4830] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc" Namespace="calico-apiserver" Pod="calico-apiserver-795448cffc-7gjp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:40.301179 containerd[1453]: time="2025-01-13T21:27:40.301047738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:40.301179 containerd[1453]: time="2025-01-13T21:27:40.301096178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:40.301179 containerd[1453]: time="2025-01-13T21:27:40.301113501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:40.301464 containerd[1453]: time="2025-01-13T21:27:40.301224400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:40.323599 systemd-networkd[1394]: cali4c9b0e62e29: Gained IPv6LL
Jan 13 21:27:40.325502 systemd[1]: Started cri-containerd-c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc.scope - libcontainer container c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc.
Jan 13 21:27:40.346038 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:27:40.379058 containerd[1453]: time="2025-01-13T21:27:40.379005492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795448cffc-7gjp9,Uid:4a1a44d8-4c43-4f74-8c7d-42c935a4693e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc\""
Jan 13 21:27:40.643551 systemd-networkd[1394]: cali73db2be64a6: Gained IPv6LL
Jan 13 21:27:40.643900 systemd-networkd[1394]: cali49e4cede2af: Gained IPv6LL
Jan 13 21:27:40.748338 containerd[1453]: time="2025-01-13T21:27:40.748260671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:40.749087 containerd[1453]: time="2025-01-13T21:27:40.749026688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Jan 13 21:27:40.750561 containerd[1453]: time="2025-01-13T21:27:40.750528896Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:40.752737 containerd[1453]: time="2025-01-13T21:27:40.752671868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:40.753346 containerd[1453]: time="2025-01-13T21:27:40.753318340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.913926645s"
Jan 13 21:27:40.753404 containerd[1453]: time="2025-01-13T21:27:40.753348978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 13 21:27:40.754055 containerd[1453]: time="2025-01-13T21:27:40.753907096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 13 21:27:40.756050 containerd[1453]: time="2025-01-13T21:27:40.756017385Z" level=info msg="CreateContainer within sandbox \"f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 21:27:40.769036 containerd[1453]: time="2025-01-13T21:27:40.768999044Z" level=info msg="CreateContainer within sandbox \"f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"202fcfd621ec3c0bc40b633aa8e83a650a27e3fcde12a577a7ddab7ba416e233\""
Jan 13 21:27:40.769600 containerd[1453]: time="2025-01-13T21:27:40.769562843Z" level=info msg="StartContainer for \"202fcfd621ec3c0bc40b633aa8e83a650a27e3fcde12a577a7ddab7ba416e233\""
Jan 13 21:27:40.771492 systemd-networkd[1394]: cali8888016d363: Gained IPv6LL
Jan 13 21:27:40.801393 systemd[1]: Started cri-containerd-202fcfd621ec3c0bc40b633aa8e83a650a27e3fcde12a577a7ddab7ba416e233.scope - libcontainer container 202fcfd621ec3c0bc40b633aa8e83a650a27e3fcde12a577a7ddab7ba416e233.
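The pull entries above give enough numbers to estimate an effective transfer rate: 42001404 bytes read from the registry in 2.913926645s. A quick Go calculation, assuming "bytes read" is the network-side figure (the "size" in the Pulled message is the repo size, which includes already-present layers):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 42001404 // from the "stop pulling image" entry above
	d, err := time.ParseDuration("2.913926645s")
	if err != nil {
		panic(err)
	}
	mibps := float64(bytesRead) / (1 << 20) / d.Seconds()
	fmt.Printf("%.1f MiB/s effective pull rate\n", mibps) // ~13.7 MiB/s
}
```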
Jan 13 21:27:40.840580 containerd[1453]: time="2025-01-13T21:27:40.840516885Z" level=info msg="StartContainer for \"202fcfd621ec3c0bc40b633aa8e83a650a27e3fcde12a577a7ddab7ba416e233\" returns successfully"
Jan 13 21:27:40.866750 kubelet[2578]: E0113 21:27:40.866100 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:40.868409 kubelet[2578]: E0113 21:27:40.868290 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:40.986683 kubelet[2578]: I0113 21:27:40.986521 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:27:40.987255 kubelet[2578]: E0113 21:27:40.987231 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:41.000448 kubelet[2578]: I0113 21:27:41.000406 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-795448cffc-hz6r5" podStartSLOduration=28.08565205 podStartE2EDuration="31.000363789s" podCreationTimestamp="2025-01-13 21:27:10 +0000 UTC" firstStartedPulling="2025-01-13 21:27:37.838962069 +0000 UTC m=+49.282364108" lastFinishedPulling="2025-01-13 21:27:40.753673808 +0000 UTC m=+52.197075847" observedRunningTime="2025-01-13 21:27:40.878666109 +0000 UTC m=+52.322068148" watchObservedRunningTime="2025-01-13 21:27:41.000363789 +0000 UTC m=+52.443765828"
Jan 13 21:27:41.604461 systemd-networkd[1394]: calie32090bdfd6: Gained IPv6LL
Jan 13 21:27:41.867555 kubelet[2578]: I0113 21:27:41.867426 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:27:41.868451 kubelet[2578]: E0113 21:27:41.867985 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:41.868451 kubelet[2578]: E0113 21:27:41.868169 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:41.868451 kubelet[2578]: E0113 21:27:41.868400 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:42.142299 kernel: bpftool[5056]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 13 21:27:42.155979 containerd[1453]: time="2025-01-13T21:27:42.155641871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:42.156706 containerd[1453]: time="2025-01-13T21:27:42.156658338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 13 21:27:42.157654 containerd[1453]: time="2025-01-13T21:27:42.157613951Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:42.159876 containerd[1453]: time="2025-01-13T21:27:42.159842963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:42.160444 containerd[1453]: time="2025-01-13T21:27:42.160421308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.406478215s"
Jan 13 21:27:42.160618 containerd[1453]: time="2025-01-13T21:27:42.160514613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 13 21:27:42.167247 containerd[1453]: time="2025-01-13T21:27:42.167210066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Jan 13 21:27:42.168368 containerd[1453]: time="2025-01-13T21:27:42.168327202Z" level=info msg="CreateContainer within sandbox \"f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 13 21:27:42.288799 containerd[1453]: time="2025-01-13T21:27:42.288731068Z" level=info msg="CreateContainer within sandbox \"f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"90af0800d50aea253135ac8dd346235cb889c25b95a68ffa461231423f363d24\""
Jan 13 21:27:42.289630 containerd[1453]: time="2025-01-13T21:27:42.289583347Z" level=info msg="StartContainer for \"90af0800d50aea253135ac8dd346235cb889c25b95a68ffa461231423f363d24\""
Jan 13 21:27:42.325412 systemd[1]: Started cri-containerd-90af0800d50aea253135ac8dd346235cb889c25b95a68ffa461231423f363d24.scope - libcontainer container 90af0800d50aea253135ac8dd346235cb889c25b95a68ffa461231423f363d24.
Jan 13 21:27:42.435361 systemd-networkd[1394]: vxlan.calico: Link UP
Jan 13 21:27:42.435571 systemd-networkd[1394]: vxlan.calico: Gained carrier
Jan 13 21:27:42.478729 containerd[1453]: time="2025-01-13T21:27:42.478686244Z" level=info msg="StartContainer for \"90af0800d50aea253135ac8dd346235cb889c25b95a68ffa461231423f363d24\" returns successfully"
Jan 13 21:27:43.907469 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL
Jan 13 21:27:44.651650 containerd[1453]: time="2025-01-13T21:27:44.651582519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:44.652458 containerd[1453]: time="2025-01-13T21:27:44.652389773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Jan 13 21:27:44.653469 containerd[1453]: time="2025-01-13T21:27:44.653440455Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:44.655838 containerd[1453]: time="2025-01-13T21:27:44.655792447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:44.656421 containerd[1453]: time="2025-01-13T21:27:44.656393315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.489153072s"
Jan 13 21:27:44.656464 containerd[1453]: time="2025-01-13T21:27:44.656422310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Jan 13 21:27:44.660388 containerd[1453]: time="2025-01-13T21:27:44.660346362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 13 21:27:44.667569 containerd[1453]: time="2025-01-13T21:27:44.667522495Z" level=info msg="CreateContainer within sandbox \"5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 13 21:27:44.682594 containerd[1453]: time="2025-01-13T21:27:44.682512119Z" level=info msg="CreateContainer within sandbox \"5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3f71da3e44600aa4134a92a8f7194182322b96739547a637efe1f3a1dc4d004a\""
Jan 13 21:27:44.683080 containerd[1453]: time="2025-01-13T21:27:44.683056580Z" level=info msg="StartContainer for \"3f71da3e44600aa4134a92a8f7194182322b96739547a637efe1f3a1dc4d004a\""
Jan 13 21:27:44.716450 systemd[1]: Started cri-containerd-3f71da3e44600aa4134a92a8f7194182322b96739547a637efe1f3a1dc4d004a.scope - libcontainer container 3f71da3e44600aa4134a92a8f7194182322b96739547a637efe1f3a1dc4d004a.
Jan 13 21:27:45.240315 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:54170.service - OpenSSH per-connection server daemon (10.0.0.1:54170).
Jan 13 21:27:45.286437 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 54170 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:45.287951 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:45.291821 systemd-logind[1437]: New session 15 of user core.
Jan 13 21:27:45.301391 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:27:45.345592 containerd[1453]: time="2025-01-13T21:27:45.345527120Z" level=info msg="StartContainer for \"3f71da3e44600aa4134a92a8f7194182322b96739547a637efe1f3a1dc4d004a\" returns successfully"
Jan 13 21:27:45.375505 containerd[1453]: time="2025-01-13T21:27:45.375361033Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:45.376114 containerd[1453]: time="2025-01-13T21:27:45.376065324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Jan 13 21:27:45.378405 containerd[1453]: time="2025-01-13T21:27:45.378358126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 717.963895ms"
Jan 13 21:27:45.378405 containerd[1453]: time="2025-01-13T21:27:45.378390016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 13 21:27:45.379240 containerd[1453]: time="2025-01-13T21:27:45.379047258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 13 21:27:45.380617 containerd[1453]: time="2025-01-13T21:27:45.380587078Z" level=info msg="CreateContainer within sandbox \"c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 21:27:45.396959 containerd[1453]: time="2025-01-13T21:27:45.396875608Z" level=info msg="CreateContainer within sandbox \"c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"99c938f9f3ce9d7150162fa21b9a72452031b0f30b93b64fe155dd933e71c4a5\""
Jan 13 21:27:45.398087 containerd[1453]: time="2025-01-13T21:27:45.397596690Z" level=info msg="StartContainer for \"99c938f9f3ce9d7150162fa21b9a72452031b0f30b93b64fe155dd933e71c4a5\""
Jan 13 21:27:45.428133 systemd[1]: Started cri-containerd-99c938f9f3ce9d7150162fa21b9a72452031b0f30b93b64fe155dd933e71c4a5.scope - libcontainer container 99c938f9f3ce9d7150162fa21b9a72452031b0f30b93b64fe155dd933e71c4a5.
Jan 13 21:27:45.446763 sshd[5210]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:45.451146 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:54170.service: Deactivated successfully.
Jan 13 21:27:45.454784 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:27:45.455897 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:27:45.457392 systemd-logind[1437]: Removed session 15.
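The "CreateContainer within sandbox … / StartContainer … returns successfully" pairs above are the kubelet driving containerd over the CRI gRPC API. The sketch below shows that two-call shape using the public k8s.io/cri-api types; it is a minimal illustration, not kubelet's code, and a real call needs a fuller ContainerConfig and the sandbox's actual PodSandboxConfig (the socket path, sandbox ID and image reference are taken from this log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// "CreateContainer within sandbox ..." followed by "StartContainer ..."
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: "c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-apiserver", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/apiserver:v3.29.1"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{}, // placeholder; kubelet passes the real one
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started", created.ContainerId)
}
```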
Jan 13 21:27:45.478476 containerd[1453]: time="2025-01-13T21:27:45.478418053Z" level=info msg="StartContainer for \"99c938f9f3ce9d7150162fa21b9a72452031b0f30b93b64fe155dd933e71c4a5\" returns successfully"
Jan 13 21:27:46.376301 kubelet[2578]: I0113 21:27:46.373048 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b67c4f7b5-5nsgj" podStartSLOduration=29.941319376 podStartE2EDuration="35.372991412s" podCreationTimestamp="2025-01-13 21:27:11 +0000 UTC" firstStartedPulling="2025-01-13 21:27:39.225043654 +0000 UTC m=+50.668445693" lastFinishedPulling="2025-01-13 21:27:44.65671569 +0000 UTC m=+56.100117729" observedRunningTime="2025-01-13 21:27:45.365210969 +0000 UTC m=+56.808612998" watchObservedRunningTime="2025-01-13 21:27:46.372991412 +0000 UTC m=+57.816393451"
Jan 13 21:27:46.444587 kubelet[2578]: I0113 21:27:46.444545 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-795448cffc-7gjp9" podStartSLOduration=31.454285571 podStartE2EDuration="36.444494948s" podCreationTimestamp="2025-01-13 21:27:10 +0000 UTC" firstStartedPulling="2025-01-13 21:27:40.388561272 +0000 UTC m=+51.831963312" lastFinishedPulling="2025-01-13 21:27:45.37877065 +0000 UTC m=+56.822172689" observedRunningTime="2025-01-13 21:27:46.373695268 +0000 UTC m=+57.817097307" watchObservedRunningTime="2025-01-13 21:27:46.444494948 +0000 UTC m=+57.887896987"
Jan 13 21:27:47.152637 containerd[1453]: time="2025-01-13T21:27:47.152543154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:47.153422 containerd[1453]: time="2025-01-13T21:27:47.153355396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 13 21:27:47.154464 containerd[1453]: time="2025-01-13T21:27:47.154433894Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:47.156632 containerd[1453]: time="2025-01-13T21:27:47.156600768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:47.157248 containerd[1453]: time="2025-01-13T21:27:47.157196060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.778117563s"
Jan 13 21:27:47.157248 containerd[1453]: time="2025-01-13T21:27:47.157247279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 13 21:27:47.159203 containerd[1453]: time="2025-01-13T21:27:47.159176143Z" level=info msg="CreateContainer within sandbox \"f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 13 21:27:47.177993 containerd[1453]: time="2025-01-13T21:27:47.177947684Z" level=info msg="CreateContainer within sandbox \"f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6251dd8cccaeb6dfcba71351e1de03086b1c7cc2421802408a388f0fa079a62e\""
Jan 13 21:27:47.178463 containerd[1453]: time="2025-01-13T21:27:47.178427002Z" level=info msg="StartContainer for \"6251dd8cccaeb6dfcba71351e1de03086b1c7cc2421802408a388f0fa079a62e\""
Jan 13 21:27:47.240390 systemd[1]: Started cri-containerd-6251dd8cccaeb6dfcba71351e1de03086b1c7cc2421802408a388f0fa079a62e.scope - libcontainer container 6251dd8cccaeb6dfcba71351e1de03086b1c7cc2421802408a388f0fa079a62e.
Jan 13 21:27:47.269700 containerd[1453]: time="2025-01-13T21:27:47.269647794Z" level=info msg="StartContainer for \"6251dd8cccaeb6dfcba71351e1de03086b1c7cc2421802408a388f0fa079a62e\" returns successfully"
Jan 13 21:27:47.378198 kubelet[2578]: I0113 21:27:47.378146 2578 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-c4drk" podStartSLOduration=28.326568575 podStartE2EDuration="36.378092734s" podCreationTimestamp="2025-01-13 21:27:11 +0000 UTC" firstStartedPulling="2025-01-13 21:27:39.106019574 +0000 UTC m=+50.549421613" lastFinishedPulling="2025-01-13 21:27:47.157543732 +0000 UTC m=+58.600945772" observedRunningTime="2025-01-13 21:27:47.375792302 +0000 UTC m=+58.819194361" watchObservedRunningTime="2025-01-13 21:27:47.378092734 +0000 UTC m=+58.821494773"
Jan 13 21:27:47.740598 kubelet[2578]: I0113 21:27:47.740541 2578 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 13 21:27:47.741462 kubelet[2578]: I0113 21:27:47.741433 2578 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 13 21:27:48.643512 containerd[1453]: time="2025-01-13T21:27:48.643468199Z" level=info msg="StopPodSandbox for \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\""
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.688 [WARNING][5353] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4dhwt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe4f7602-38f2-4964-b7eb-58611454a234", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5", Pod:"coredns-76f75df574-4dhwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73db2be64a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.689 [INFO][5353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0"
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.689 [INFO][5353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" iface="eth0" netns=""
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.689 [INFO][5353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0"
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.689 [INFO][5353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0"
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.713 [INFO][5362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" HandleID="k8s-pod-network.a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0"
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.713 [INFO][5362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.713 [INFO][5362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.718 [WARNING][5362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" HandleID="k8s-pod-network.a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0"
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.718 [INFO][5362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" HandleID="k8s-pod-network.a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0"
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.720 [INFO][5362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:27:48.726395 containerd[1453]: 2025-01-13 21:27:48.722 [INFO][5353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0"
Jan 13 21:27:48.727037 containerd[1453]: time="2025-01-13T21:27:48.726440318Z" level=info msg="TearDown network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\" successfully"
Jan 13 21:27:48.727037 containerd[1453]: time="2025-01-13T21:27:48.726467310Z" level=info msg="StopPodSandbox for \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\" returns successfully"
Jan 13 21:27:48.733682 containerd[1453]: time="2025-01-13T21:27:48.733628108Z" level=info msg="RemovePodSandbox for \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\""
Jan 13 21:27:48.735766 containerd[1453]: time="2025-01-13T21:27:48.735738960Z" level=info msg="Forcibly stopping sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\""
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.775 [WARNING][5385] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--4dhwt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe4f7602-38f2-4964-b7eb-58611454a234", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38301313a52a881b3c09710fc820c4c5b89332266b2865c3bef95f989a2a2fd5", Pod:"coredns-76f75df574-4dhwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73db2be64a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.775 [INFO][5385] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0"
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.775 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" iface="eth0" netns=""
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.775 [INFO][5385] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0"
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.775 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0"
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.800 [INFO][5393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" HandleID="k8s-pod-network.a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0"
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.800 [INFO][5393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.801 [INFO][5393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.805 [WARNING][5393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" HandleID="k8s-pod-network.a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0"
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.805 [INFO][5393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" HandleID="k8s-pod-network.a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0" Workload="localhost-k8s-coredns--76f75df574--4dhwt-eth0"
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.807 [INFO][5393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:27:48.813084 containerd[1453]: 2025-01-13 21:27:48.810 [INFO][5385] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0"
Jan 13 21:27:48.813530 containerd[1453]: time="2025-01-13T21:27:48.813134275Z" level=info msg="TearDown network for sandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\" successfully"
Jan 13 21:27:48.952484 containerd[1453]: time="2025-01-13T21:27:48.952332053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:27:48.952484 containerd[1453]: time="2025-01-13T21:27:48.952469338Z" level=info msg="RemovePodSandbox \"a48b270e2ab10b6f15f3f2626562b860354d7e928e816ca6a9454e4bc82502b0\" returns successfully"
Jan 13 21:27:48.953246 containerd[1453]: time="2025-01-13T21:27:48.953200933Z" level=info msg="StopPodSandbox for \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\""
Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:48.996 [WARNING][5420] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0", GenerateName:"calico-apiserver-795448cffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a1a44d8-4c43-4f74-8c7d-42c935a4693e", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795448cffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc", Pod:"calico-apiserver-795448cffc-7gjp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie32090bdfd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:48.996 [INFO][5420] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50"
Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:48.996 [INFO][5420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" iface="eth0" netns=""
Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:48.996 [INFO][5420] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50"
Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:48.996 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50"
Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:49.020 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" HandleID="k8s-pod-network.76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0"
Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:49.020 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:49.020 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:49.026 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" HandleID="k8s-pod-network.76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0" Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:49.026 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" HandleID="k8s-pod-network.76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0" Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:49.027 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.032947 containerd[1453]: 2025-01-13 21:27:49.029 [INFO][5420] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Jan 13 21:27:49.033620 containerd[1453]: time="2025-01-13T21:27:49.032988772Z" level=info msg="TearDown network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\" successfully" Jan 13 21:27:49.033620 containerd[1453]: time="2025-01-13T21:27:49.033016806Z" level=info msg="StopPodSandbox for \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\" returns successfully" Jan 13 21:27:49.033810 containerd[1453]: time="2025-01-13T21:27:49.033760122Z" level=info msg="RemovePodSandbox for \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\"" Jan 13 21:27:49.033846 containerd[1453]: time="2025-01-13T21:27:49.033816671Z" level=info msg="Forcibly stopping sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\"" Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.069 [WARNING][5452] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0", GenerateName:"calico-apiserver-795448cffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a1a44d8-4c43-4f74-8c7d-42c935a4693e", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795448cffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c382545eac3bf20af3aeea2ed60d46176d5c02364c4d43f31270545567814cfc", Pod:"calico-apiserver-795448cffc-7gjp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie32090bdfd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.069 [INFO][5452] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.069 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" iface="eth0" netns="" Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.070 [INFO][5452] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.070 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.092 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" HandleID="k8s-pod-network.76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0" Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.092 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.092 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.097 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" HandleID="k8s-pod-network.76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0" Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.097 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" HandleID="k8s-pod-network.76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Workload="localhost-k8s-calico--apiserver--795448cffc--7gjp9-eth0" Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.098 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.103972 containerd[1453]: 2025-01-13 21:27:49.101 [INFO][5452] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50" Jan 13 21:27:49.104436 containerd[1453]: time="2025-01-13T21:27:49.104013142Z" level=info msg="TearDown network for sandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\" successfully" Jan 13 21:27:49.123298 containerd[1453]: time="2025-01-13T21:27:49.123233740Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:27:49.123370 containerd[1453]: time="2025-01-13T21:27:49.123339795Z" level=info msg="RemovePodSandbox \"76a5495263412c8c651492758d4ee3870c143fc6e3309bb6614ee07cf9d35c50\" returns successfully" Jan 13 21:27:49.123986 containerd[1453]: time="2025-01-13T21:27:49.123949814Z" level=info msg="StopPodSandbox for \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\"" Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.162 [WARNING][5481] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--lgbw8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f7353521-0488-482c-a756-367e20c4c1b4", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293", Pod:"coredns-76f75df574-lgbw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49e4cede2af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.163 [INFO][5481] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.163 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" iface="eth0" netns="" Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.163 [INFO][5481] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.163 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.194 [INFO][5489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" HandleID="k8s-pod-network.65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.194 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.194 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.200 [WARNING][5489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" HandleID="k8s-pod-network.65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.200 [INFO][5489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" HandleID="k8s-pod-network.65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.201 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.207628 containerd[1453]: 2025-01-13 21:27:49.204 [INFO][5481] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:49.207628 containerd[1453]: time="2025-01-13T21:27:49.207556305Z" level=info msg="TearDown network for sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\" successfully" Jan 13 21:27:49.207628 containerd[1453]: time="2025-01-13T21:27:49.207593466Z" level=info msg="StopPodSandbox for \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\" returns successfully" Jan 13 21:27:49.208370 containerd[1453]: time="2025-01-13T21:27:49.208172044Z" level=info msg="RemovePodSandbox for \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\"" Jan 13 21:27:49.208370 containerd[1453]: time="2025-01-13T21:27:49.208195329Z" level=info msg="Forcibly stopping sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\"" Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.245 [WARNING][5511] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--lgbw8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f7353521-0488-482c-a756-367e20c4c1b4", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10ef07a26a543d6f90bdf0d4987b31c8096062f3523a1f3ce5d8e719a03a0293", Pod:"coredns-76f75df574-lgbw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49e4cede2af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.246 [INFO][5511] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.246 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" iface="eth0" netns="" Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.247 [INFO][5511] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.247 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.271 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" HandleID="k8s-pod-network.65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.271 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.271 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.276 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" HandleID="k8s-pod-network.65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.276 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" HandleID="k8s-pod-network.65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Workload="localhost-k8s-coredns--76f75df574--lgbw8-eth0" Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.277 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.282421 containerd[1453]: 2025-01-13 21:27:49.280 [INFO][5511] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505" Jan 13 21:27:49.282881 containerd[1453]: time="2025-01-13T21:27:49.282493997Z" level=info msg="TearDown network for sandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\" successfully" Jan 13 21:27:49.287328 containerd[1453]: time="2025-01-13T21:27:49.287248082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:27:49.287380 containerd[1453]: time="2025-01-13T21:27:49.287355920Z" level=info msg="RemovePodSandbox \"65f86b97f14f5b832bf4287ca367202215a260e3e67640c44dc63b436eb5d505\" returns successfully" Jan 13 21:27:49.287913 containerd[1453]: time="2025-01-13T21:27:49.287877318Z" level=info msg="StopPodSandbox for \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\"" Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.326 [WARNING][5540] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0", GenerateName:"calico-apiserver-795448cffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1575694f-4276-4e47-b4a5-e229a7267251", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795448cffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436", Pod:"calico-apiserver-795448cffc-hz6r5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ee35b0acc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.327 [INFO][5540] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.327 [INFO][5540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" iface="eth0" netns="" Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.327 [INFO][5540] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.327 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.349 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" HandleID="k8s-pod-network.645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.349 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.349 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.354 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" HandleID="k8s-pod-network.645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.354 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" HandleID="k8s-pod-network.645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.356 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.361194 containerd[1453]: 2025-01-13 21:27:49.358 [INFO][5540] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:49.361955 containerd[1453]: time="2025-01-13T21:27:49.361253763Z" level=info msg="TearDown network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\" successfully" Jan 13 21:27:49.361955 containerd[1453]: time="2025-01-13T21:27:49.361310833Z" level=info msg="StopPodSandbox for \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\" returns successfully" Jan 13 21:27:49.362050 containerd[1453]: time="2025-01-13T21:27:49.362014973Z" level=info msg="RemovePodSandbox for \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\"" Jan 13 21:27:49.362087 containerd[1453]: time="2025-01-13T21:27:49.362059179Z" level=info msg="Forcibly stopping sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\"" Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.401 [WARNING][5571] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0", GenerateName:"calico-apiserver-795448cffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1575694f-4276-4e47-b4a5-e229a7267251", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795448cffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f44f781c5712a80900c0ebc2fdca8d0934900dbde876d37fd568059792930436", Pod:"calico-apiserver-795448cffc-hz6r5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ee35b0acc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.401 [INFO][5571] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.401 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" iface="eth0" netns="" Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.401 [INFO][5571] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.401 [INFO][5571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.424 [INFO][5578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" HandleID="k8s-pod-network.645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.425 [INFO][5578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.425 [INFO][5578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.430 [WARNING][5578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" HandleID="k8s-pod-network.645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.430 [INFO][5578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" HandleID="k8s-pod-network.645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Workload="localhost-k8s-calico--apiserver--795448cffc--hz6r5-eth0" Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.432 [INFO][5578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.436897 containerd[1453]: 2025-01-13 21:27:49.434 [INFO][5571] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097" Jan 13 21:27:49.437364 containerd[1453]: time="2025-01-13T21:27:49.436918549Z" level=info msg="TearDown network for sandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\" successfully" Jan 13 21:27:49.440866 containerd[1453]: time="2025-01-13T21:27:49.440801873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:27:49.440866 containerd[1453]: time="2025-01-13T21:27:49.440856999Z" level=info msg="RemovePodSandbox \"645ffd06a90ea92e81d07ede3573dfa41145dd0a7562d2e4f5a559bc1d081097\" returns successfully" Jan 13 21:27:49.441574 containerd[1453]: time="2025-01-13T21:27:49.441529729Z" level=info msg="StopPodSandbox for \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\"" Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.477 [WARNING][5601] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c4drk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b709ff7-1b29-4a55-8a27-61c5d7be7f36", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4", Pod:"csi-node-driver-c4drk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8888016d363", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.478 [INFO][5601] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.478 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" iface="eth0" netns="" Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.478 [INFO][5601] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.478 [INFO][5601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.498 [INFO][5609] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" HandleID="k8s-pod-network.6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.499 [INFO][5609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.499 [INFO][5609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.504 [WARNING][5609] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" HandleID="k8s-pod-network.6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.504 [INFO][5609] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" HandleID="k8s-pod-network.6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.505 [INFO][5609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.510094 containerd[1453]: 2025-01-13 21:27:49.507 [INFO][5601] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:49.510548 containerd[1453]: time="2025-01-13T21:27:49.510126419Z" level=info msg="TearDown network for sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\" successfully" Jan 13 21:27:49.510548 containerd[1453]: time="2025-01-13T21:27:49.510154503Z" level=info msg="StopPodSandbox for \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\" returns successfully" Jan 13 21:27:49.510889 containerd[1453]: time="2025-01-13T21:27:49.510823496Z" level=info msg="RemovePodSandbox for \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\"" Jan 13 21:27:49.510889 containerd[1453]: time="2025-01-13T21:27:49.510876628Z" level=info msg="Forcibly stopping sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\"" Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.550 [WARNING][5632] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c4drk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b709ff7-1b29-4a55-8a27-61c5d7be7f36", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7ee9330768de0f896926cca854e5bb017fe62b5591339384170f23753bd1ae4", Pod:"csi-node-driver-c4drk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8888016d363", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.551 [INFO][5632] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.551 [INFO][5632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" iface="eth0" netns="" Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.551 [INFO][5632] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.551 [INFO][5632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.573 [INFO][5640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" HandleID="k8s-pod-network.6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.573 [INFO][5640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.573 [INFO][5640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.578 [WARNING][5640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" HandleID="k8s-pod-network.6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.578 [INFO][5640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" HandleID="k8s-pod-network.6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Workload="localhost-k8s-csi--node--driver--c4drk-eth0" Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.580 [INFO][5640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.586303 containerd[1453]: 2025-01-13 21:27:49.583 [INFO][5632] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac" Jan 13 21:27:49.586837 containerd[1453]: time="2025-01-13T21:27:49.586366037Z" level=info msg="TearDown network for sandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\" successfully" Jan 13 21:27:49.590161 containerd[1453]: time="2025-01-13T21:27:49.590133225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:27:49.590214 containerd[1453]: time="2025-01-13T21:27:49.590186508Z" level=info msg="RemovePodSandbox \"6ee650686c3b7aaf7e19ad09cc925b85ce08c1cf3f804d4c65cc3784b233d3ac\" returns successfully" Jan 13 21:27:49.590810 containerd[1453]: time="2025-01-13T21:27:49.590781909Z" level=info msg="StopPodSandbox for \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\"" Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.625 [WARNING][5662] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0", GenerateName:"calico-kube-controllers-b67c4f7b5-", Namespace:"calico-system", SelfLink:"", UID:"4c856975-e2ef-4c9d-acd0-da77d92975f0", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b67c4f7b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f", Pod:"calico-kube-controllers-b67c4f7b5-5nsgj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c9b0e62e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.625 [INFO][5662] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.625 [INFO][5662] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" iface="eth0" netns="" Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.625 [INFO][5662] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.625 [INFO][5662] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.644 [INFO][5670] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" HandleID="k8s-pod-network.b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.644 [INFO][5670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.644 [INFO][5670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.650 [WARNING][5670] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" HandleID="k8s-pod-network.b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.650 [INFO][5670] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" HandleID="k8s-pod-network.b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.651 [INFO][5670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.656237 containerd[1453]: 2025-01-13 21:27:49.653 [INFO][5662] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:49.656968 containerd[1453]: time="2025-01-13T21:27:49.656320440Z" level=info msg="TearDown network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\" successfully" Jan 13 21:27:49.656968 containerd[1453]: time="2025-01-13T21:27:49.656359295Z" level=info msg="StopPodSandbox for \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\" returns successfully" Jan 13 21:27:49.657024 containerd[1453]: time="2025-01-13T21:27:49.656957240Z" level=info msg="RemovePodSandbox for \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\"" Jan 13 21:27:49.657024 containerd[1453]: time="2025-01-13T21:27:49.656990725Z" level=info msg="Forcibly stopping sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\"" Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.697 [WARNING][5692] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0", GenerateName:"calico-kube-controllers-b67c4f7b5-", Namespace:"calico-system", SelfLink:"", UID:"4c856975-e2ef-4c9d-acd0-da77d92975f0", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b67c4f7b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ff2ebbc1e27b7b4b70740f834b710b906f091c19c3872bfa36caee88300331f", Pod:"calico-kube-controllers-b67c4f7b5-5nsgj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c9b0e62e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.697 [INFO][5692] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.698 [INFO][5692] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" iface="eth0" netns="" Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.698 [INFO][5692] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.698 [INFO][5692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.725 [INFO][5699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" HandleID="k8s-pod-network.b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.725 [INFO][5699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.725 [INFO][5699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.732 [WARNING][5699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" HandleID="k8s-pod-network.b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.732 [INFO][5699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" HandleID="k8s-pod-network.b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Workload="localhost-k8s-calico--kube--controllers--b67c4f7b5--5nsgj-eth0" Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.734 [INFO][5699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:27:49.740319 containerd[1453]: 2025-01-13 21:27:49.737 [INFO][5692] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8" Jan 13 21:27:49.740760 containerd[1453]: time="2025-01-13T21:27:49.740381549Z" level=info msg="TearDown network for sandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\" successfully" Jan 13 21:27:49.744691 containerd[1453]: time="2025-01-13T21:27:49.744640658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:27:49.744774 containerd[1453]: time="2025-01-13T21:27:49.744717608Z" level=info msg="RemovePodSandbox \"b19cdfe59d97b871f484398e70eaf9ed4441a73d02a6f2838b44aa0e70ba94a8\" returns successfully" Jan 13 21:27:50.458628 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:38096.service - OpenSSH per-connection server daemon (10.0.0.1:38096). Jan 13 21:27:50.501617 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 38096 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:50.503459 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:50.507387 systemd-logind[1437]: New session 16 of user core. Jan 13 21:27:50.516391 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:27:50.639776 sshd[5707]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:50.644107 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:38096.service: Deactivated successfully. Jan 13 21:27:50.646249 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:27:50.646929 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:27:50.647959 systemd-logind[1437]: Removed session 16. Jan 13 21:27:53.479702 kubelet[2578]: E0113 21:27:53.479665 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:55.651457 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:38108.service - OpenSSH per-connection server daemon (10.0.0.1:38108). Jan 13 21:27:55.691929 sshd[5772]: Accepted publickey for core from 10.0.0.1 port 38108 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:55.693689 sshd[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:55.697749 systemd-logind[1437]: New session 17 of user core. Jan 13 21:27:55.703388 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 13 21:27:55.820629 sshd[5772]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:55.832597 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:38108.service: Deactivated successfully.
Jan 13 21:27:55.834529 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:27:55.835534 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:27:55.852011 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:38112.service - OpenSSH per-connection server daemon (10.0.0.1:38112).
Jan 13 21:27:55.852964 systemd-logind[1437]: Removed session 17.
Jan 13 21:27:55.879150 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 38112 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:55.880845 sshd[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:55.885419 systemd-logind[1437]: New session 18 of user core.
Jan 13 21:27:55.894373 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:27:56.084909 sshd[5787]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:56.094312 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:38112.service: Deactivated successfully.
Jan 13 21:27:56.097090 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:27:56.098910 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:27:56.106814 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:38124.service - OpenSSH per-connection server daemon (10.0.0.1:38124).
Jan 13 21:27:56.108658 systemd-logind[1437]: Removed session 18.
Jan 13 21:27:56.139000 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 38124 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:56.140914 sshd[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:56.146406 systemd-logind[1437]: New session 19 of user core.
Jan 13 21:27:56.151432 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:27:57.699809 sshd[5800]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:57.718646 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:44060.service - OpenSSH per-connection server daemon (10.0.0.1:44060).
Jan 13 21:27:57.719225 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:38124.service: Deactivated successfully.
Jan 13 21:27:57.721225 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:27:57.725216 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:27:57.728048 systemd-logind[1437]: Removed session 19.
Jan 13 21:27:57.759177 sshd[5818]: Accepted publickey for core from 10.0.0.1 port 44060 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:57.761028 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:57.765419 systemd-logind[1437]: New session 20 of user core.
Jan 13 21:27:57.777396 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:27:57.994687 sshd[5818]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:58.002042 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:44060.service: Deactivated successfully.
Jan 13 21:27:58.005180 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:27:58.006966 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:27:58.014569 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:44070.service - OpenSSH per-connection server daemon (10.0.0.1:44070).
Jan 13 21:27:58.015431 systemd-logind[1437]: Removed session 20.
Jan 13 21:27:58.045230 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 44070 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:58.047337 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:58.052949 systemd-logind[1437]: New session 21 of user core.
Jan 13 21:27:58.057408 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:27:58.174924 sshd[5834]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:58.180143 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:44070.service: Deactivated successfully.
Jan 13 21:27:58.182694 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:27:58.183597 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:27:58.184850 systemd-logind[1437]: Removed session 21.
Jan 13 21:28:03.186749 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:44074.service - OpenSSH per-connection server daemon (10.0.0.1:44074).
Jan 13 21:28:03.223757 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 44074 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:03.225864 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:03.231080 systemd-logind[1437]: New session 22 of user core.
Jan 13 21:28:03.237527 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:28:03.359428 sshd[5856]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:03.364532 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:44074.service: Deactivated successfully.
Jan 13 21:28:03.367355 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:28:03.368228 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:28:03.369391 systemd-logind[1437]: Removed session 22.
Jan 13 21:28:06.667381 kubelet[2578]: E0113 21:28:06.667344 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:08.376026 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:44628.service - OpenSSH per-connection server daemon (10.0.0.1:44628).
Jan 13 21:28:08.412238 sshd[5897]: Accepted publickey for core from 10.0.0.1 port 44628 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:08.414121 sshd[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:08.418759 systemd-logind[1437]: New session 23 of user core.
Jan 13 21:28:08.427429 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:28:08.537629 sshd[5897]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:08.541327 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:44628.service: Deactivated successfully.
Jan 13 21:28:08.543279 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:28:08.543927 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:28:08.544863 systemd-logind[1437]: Removed session 23.
Jan 13 21:28:12.252716 kubelet[2578]: I0113 21:28:12.252636 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:28:13.550685 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:44644.service - OpenSSH per-connection server daemon (10.0.0.1:44644).
Jan 13 21:28:13.589505 sshd[5915]: Accepted publickey for core from 10.0.0.1 port 44644 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:13.591535 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:13.596066 systemd-logind[1437]: New session 24 of user core.
Jan 13 21:28:13.604666 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 21:28:13.668380 kubelet[2578]: E0113 21:28:13.668250 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:13.722233 sshd[5915]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:13.728098 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:44644.service: Deactivated successfully.
Jan 13 21:28:13.731823 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 21:28:13.733442 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit.
Jan 13 21:28:13.734662 systemd-logind[1437]: Removed session 24.
Jan 13 21:28:18.747737 systemd[1]: Started sshd@24-10.0.0.116:22-10.0.0.1:57984.service - OpenSSH per-connection server daemon (10.0.0.1:57984).
Jan 13 21:28:18.781896 sshd[5929]: Accepted publickey for core from 10.0.0.1 port 57984 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:18.783805 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:18.788761 systemd-logind[1437]: New session 25 of user core.
Jan 13 21:28:18.801580 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 21:28:18.926568 sshd[5929]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:18.930253 systemd[1]: sshd@24-10.0.0.116:22-10.0.0.1:57984.service: Deactivated successfully.
Jan 13 21:28:18.933212 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:28:18.935519 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:28:18.937063 systemd-logind[1437]: Removed session 25.